Subleading contributions to the nuclear scalar isoscalar currents We extend our recent analyses of the nuclear vector, axial-vector and pseudoscalar currents and derive the leading one-loop corrections to the two-nucleon scalar current operator in the framework of chiral effective field theory using the method of unitary transformation. We also show that the scalar current operators at zero momentum transfer are directly related to the quark mass dependence of the nuclear forces. I. INTRODUCTION The first principles description of nuclei, nuclear matter and reactions is one of the great challenges in contemporary physics with applications ranging from low-energy searches for physics beyond the Standard Model (SM) to properties of neutron stars and neutron star mergers. The currently most efficient and feasible approach along this line relies on the application of suitably taylored effective field theories (EFTs). In particular, an extension of chiral perturbation theory to multi-nucleon systems [1,2], commonly referred to as chiral EFT, has been applied over the last two decades to derive nuclear forces at high orders in the EFT expansion in harmony with the spontaneously broken approximate chiral symmetry of QCD [3,4]. See Refs. [5,6] for the most accurate and precise chiral two-nucleon interactions at fifth order and Refs. [7][8][9][10][11] for a collection of review articles describing the current state-of-the-art in chiral EFT for nuclear forces and selected applications. In parallel with these developments, current operators describing the interactions of nuclear systems with external vector, axial-vector and pseudoscalar sources needed to study electroweak reactions driven by a single photon-or W /Z-boson exchange have been worked out completely through fourth order in the heavy-baryon formulation of chiral EFT with pions and nucleons as the only dynamical degrees of freedom, see Refs. [12,13] for the pioneering studies by Park et al., for our calculations using the method of unitary transformation [18][19][20] and Refs. [21][22][23][24] for an independent derivation by the Jlab-Pisa group in the framework of time-ordered perturbation theory. A direct comparison of the expressions for the current operators derived by different group is hindered by their scheme dependence. However, at least for the two-pion exchange axial-vector currents, our results [16] appear to be not unitarily equivalent to the ones of the Pisa-Jlab group [23], see Ref. [25] for a detailed discussion of the box diagram contribution. We further emphasize that offshell consistency of the electroweak operators derived by our group [14][15][16][17] and the corresponding (unregularized) two- [26,27] and three-nucleon forces [28,29] has been verified explicitly by means of the corresponding continuity equations in Refs. [16,17]. Here and in what follows, we assume exact isospin symmetry with m u = m d ≡ m q . Embedded in the SM, the interactions between quarks and the external vector and axial-vector sources are probed in electroweak reactions involving hadrons or nuclei. Low-energy nuclear systems are nowadays commonly described by solving the many-body Schrödinger equation with the nuclear forces derived in chiral EFT [3,4,7]. An extension to electroweak processes with nuclei requires the knowledge of the corresponding nuclear current operators defined in terms of the functional derivatives of the effective nuclear Hamiltonian in the presence of external fields with respect to v µ (x) and a µ (x) [16]. 
For the vector, axial-vector and pseudoscalar sources, the corresponding expressions are already available up to fourth chiral order [14][15][16][17]. In this work we focus on the response of nuclear systems to the external scalar source s(x) and thus set v µ = a µ = p = 0. While the scalar currents cannot be probed experimentally within the SM due to the absence of scalar sources, they figure prominently in dark matter (DM) searches in a wide variety of DM models such as e.g. Higgs-portal DM and weakly-interacting massive particles (WIMPs), see [35][36][37] for recent review articles. For example, the dominant interactions of a spin-1/2 Dirac-fermion DM particle χ with the strong sector of the SM is given by the Lagrangian where i denotes the flavor quantum number, G a µν is the gluon field strength, α s is the strong coupling constant and the couplings c i (c G ) determine the strength of the interaction between χ and quarks of flavor i (gluons). Notice that the contributions from coupling to heavy quarks (charm, bottom and top) can be integrated out [38] and the sum in Eq. (1.2) can thus be taken only over the light quark flavors by replacing the coupling constants c i , c G with the corresponding effective ones. Thus, the scalar nuclear currents derived in our paper can be used to describe the interactions of nuclei with DM particles emerging from their isoscalar coupling to the up-and down-quarks ∝ (c u +c d ). Apart from their relevance for DM searches, the scalar currents are intimately related to quark mass dependence of hadronic and nuclear observables. For example, the pion-nucleon σ-term, σ πN , corresponds to the isoscalar scalar form factor of the nucleon at zero momentum transfer times the quark mass and determines the amount of the nucleon mass generated by the up-and down-quarks. Its value has been accurately determined from the recent Roy-Steiner-equation analysis of pion-nucleon scattering accompanied with pionic hydrogen and deuterium data to be σ πN = (59.1 ± 3.5) MeV [39]. For the status of lattice QCD calculations of σ πN see Ref. [40]. As pointed out, however, in Ref. [41], there is relation between the σ-term and the S-wave πN scattering lengths that so far has not been checked for the lattice calculations. Nuclear σ-terms and scalar form factors of light nuclei have also been studied in lattice QCD, albeit presently at unphysically large quark masses [42,43]. Interestingly, the scalar matrix elements were found in these studies to be strongly affected by nuclear effects (in contrast to the axial-vector and tensor charges), which indicates that scalar exchange currents may play an important role. Last but not least, as will be shown below, the scalar isoscalar currents are directly related to the quark mass dependence of the nuclear forces, a subject that gained a lot of attention in the EFT community in connection with ongoing lattice QCD efforts in the multibaryon sector [44][45][46][47][48][49][50][51][52], a conjectured infrared renormalization group limit cycle in QCD [53,54], searches for possible temporal variation of the light quark masses [55,56] and anthropic considerations related to the famous Hoyle state in 12 C [57][58][59][60]. Clearly, nuclear scalar currents have already been studied before in the framework of chiral EFT, see e.g. [61][62][63][64][65][66][67][68]. For the two-nucleon currents, only the dominant contribution at the chiral order Q −2 stemming from the one-pion exchange has been considered so far. 
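To make the connection invoked here explicit, the following schematic relations (standard results quoted for orientation, not taken from the present paper; written with nonrelativistic state normalization, so other conventions carry additional factors of 2m_N) express the pion-nucleon sigma-term through the Feynman-Hellmann theorem:

  \sigma_{\pi N} \;=\; m_q \,\langle N |\, \bar{u}u + \bar{d}d \,| N \rangle
               \;=\; m_q \,\frac{\partial m_N}{\partial m_q}\,,
  \qquad m_u = m_d \equiv m_q \,.

In this form it is evident that the same object that controls the isoscalar scalar form factor of the nucleon at zero momentum transfer also measures the quark-mass slope of the nucleon mass, which is the single-nucleon analogue of the relation between nuclear scalar currents and the quark-mass dependence of nuclear forces discussed below.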
Here and in what follows, Q ∈ {M π /Λ b , p/Λ b } denotes the chiral expansion parameter, M π is the pion mass, p refers to the magnitude of three-momenta of external nucleons, while Λ b denotes the breakdown scale of the chiral expansion. For a detailed discussion of the employed power counting scheme for nuclear currents see Ref. [16]. The two-body scalar current is suppressed by just one power of the expansion parameter Q relative to the dominant one-body contribution. Such an enhancement relative to the generally expected suppression of (A+1)-nucleon operators relative to the dominant A-nucleon terms by Q 2 can be traced back to the vertex structure of the effective Lagrangian and is not uncommon. For example, one-and two-nucleon operators contribute at the same order to the axial charge and electromagnetic current operators, see Table II of Ref. [16] and Table 1 of Ref. [17], respectively. For the scalar operator, the relative enhancement of the two-body terms is caused by the absence of one-body contributions at the expected leading order Q −4 , see e.g. Table III of Ref. [16] for the hierarchy of the pseudoscalar currents. The first corrections to the scalar current appear at order Q −2 from the leading one-loop diagrams involving a single-nucleon line [63]. In this paper we derive the subleading contributions to the two-nucleon scalar isoscalar current operators at order Q 0 . While the one-body current is not yet available at the same accuracy level, using empirical information on the scalar form factor of the nucleon from lattice QCD instead of relying on its strict chiral expansion may, in the future, provide a more reliable and efficient approach. A similar strategy is, in fact, commonly used in studies of electromagnetic processes, see e.g. [69,70] and Ref. [71] for a recent example. Our paper is organized as follows. In section II, we briefly describe the derivation of the current operator using the method of unitary transformation and provide explicit expressions for the leading (i.e. order-Q −2 ) and subleading (i.e. order-Q 0 ) two-body contributions. Next, in section III, we establish a connection between the scalar currents at zero momentum transfer and the quark mass dependence of the nuclear force. The obtained results are briefly summarized in section IV, while some further technical details and the somewhat lengthy expressions for the two-pion exchange contributions are provided in appendices A and B. II. TWO-NUCLEON SCALAR OPERATORS The derivation of the nuclear currents from the effective chiral Lagrangian using the method of unitary transformation is described in detail in Ref. [16]. The explicit form of the effective Lagrangian in the heavy-baryon formulation can be found in Refs. [72] and [73] for the pionic and pion-nucleon terms, respectively. The relevant terms in L N N will be specified in section II D. As already pointed out above, for the purpose of this study we switch off all external sources except the scalar one, s(x). To derive the scalar currents consistent with the nuclear potentials in Refs. [26-29, 31, 32] and electroweak currents in Refs. [14][15][16][17], we first switch from the effective pion-nucleon Lagrangian to the corresponding Hamiltonian H[s] using the canonical formalism and then apply the unitary transformations U Okubo , U η and U [s]. Here and in what follows, we adopt the notation of Ref. [16]. 
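Schematically, the decoupling achieved by the Okubo transformation can be summarized as follows (a sketch of the standard construction; see Refs. [18,28,29,31,32] for the precise definitions and notation):

  \tilde{H} \;=\; U_{\rm Okubo}^{\dagger}\, H\, U_{\rm Okubo}\,, \qquad
  \lambda\, \tilde{H}\, \eta \;=\; \eta\, \tilde{H}\, \lambda \;=\; 0\,, \qquad
  \lambda \;\equiv\; 1 - \eta\,,

so that the nuclear potential is read off from the purely nucleonic block \eta \tilde{H} \eta, while the additional transformations U_\eta and U[s] act only on the \eta-space and therefore do not spoil this block-diagonal structure.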
In particular, the Okubo transformation U Okubo [18] is a "minimal" unitary transformation needed to derive nuclear forces by decoupling the purely nucleonic subspace η from the rest of the pion-nucleon Fock space in the absence of external sources. However, as found in Ref. [31], the resulting nuclear potentials ηU † Okubo HU Okubo η, with η denoting the projection operator onto the η-space, are non-renormalizable starting from next-to-next-to-next-to-leading order (N 3 LO), i.e. order Q 4 . To obtain renormalized nuclear potentials, a more general class of unitary operators was employed in Refs. [31,32] by performing additional transformations U η on the η-space. The explicit form of the "strong" unitary operators U Okubo and U η up to next-to-next-to-leading order (N 2 LO) can be found in Refs. [28,29,31,32]. Nuclear currents can, in principle, be obtained by switching on the external classical sources in the effective Lagrangian, performing the same unitary transformations U Okubo U η as in the strong sector, and taking functional derivatives with respect to the external sources. However, similarly to the above-mentioned renormalization problem with the nuclear potentials, the current operators obtained in this way can, in general, not be renormalized. A renormalizable formulation of the current operators requires the introduction of an even more general class of unitary transformations by performing subsequent η-space rotations with unitary operators whose generators depend on the external sources. The corresponding unitary transformations are given explicitly in Refs. [16] and [17] up to N 2 LO. Notice that such unitary transformations are necessarily time-dependent through the dependence of their generators on the external sources. This, in general, induces a dependence of the corresponding current operators on the energy transfer and results in additional terms in the continuity equations [16]. We now follow the same strategy for the scalar currents and introduce additional η-space unitary transformations U [s], with U [s]| s=m q = η, in order to obtain renormalizable currents. The most general form of the operator U [s] at the chiral order we are working with is given in appendix A and is parametrized in terms of four real phases α s i , i = 0, . . . , 3. The nuclear scalar current is then defined via the functional derivative of the transformed Hamiltonian with respect to the scalar source, Eq. (2.2). [Figure caption: Solid, dashed and wiggly lines denote nucleons, pions and external scalar sources, in order. Solid dots denote the leading-order vertices from the effective Lagrangians; see [16] for notation.] While all the phases remain unfixed, they do not show up in the resulting expressions for the nuclear current given in the following sections. To the order we are working, we therefore do not see any unitary ambiguity. A. Contributions at order Q −2 The chiral expansion of the 2N scalar isoscalar current starts at order Q −2 . The dominant contribution is well known to emerge from the one-pion exchange diagram shown in Fig. 1 and has the form given in Eq. (2.3), where g A and F π are the nucleon axial-vector coupling and the pion decay constant, respectively, and q i = p i ′ − p i denotes the momentum transfer of nucleon i. Further, σ i (τ i ) refer to the spin (isospin) Pauli matrices of nucleon i. Here and in what follows, we follow the notation of our paper [16]. In terms of the Fock-space operator Ŝ 2N , the expressions we give correspond to matrix elements between two-nucleon momentum eigenstates, where p i (p i ′) refers to the initial (final) momentum of nucleon i, k is the momentum of the external scalar source and the nucleon states are normalized according to the nonrelativistic relation ⟨ p ′ | p ⟩ = δ (3) ( p ′ − p ).
Finally, we emphasize that the dependence of the scalar currents on m q , which is renormalization-scale dependent, reflects the fact that in our convention, the external scalar source s(x) couples to the QCD density q̄q rather than m q q̄q. Thus, only the combination m q Ŝ 2N (k) is renormalization-scale independent. This is completely analogous to the pseudoscalar currents derived in Ref. [16], and we refer the reader to that work for more details. B. One-pion-exchange contributions at order Q 0 Given that the first corrections to the pionic vertices are suppressed by two powers of the expansion parameter and the absence of vertices in L (2) πN involving the scalar source and a single pion, the first corrections to the two-nucleon current appear at order Q 0 . In Fig. 2 we show all one-loop one-pion-exchange diagrams of non-tadpole type that contribute to the scalar current at this order. Similarly, the corresponding tadpole and tree-level diagrams yielding nonvanishing contributions are visualized in Fig. 3. It should be understood that the diagrams we show here and in what follows do, in general, not correspond to Feynman graphs and serve for the purpose of visualizing the corresponding types of contributions to the operators. The meaning of the diagrams is specific to the method of unitary transformation, see [16] for details. Using dimensional regularization, replacing all bare low-energy constants (LECs) l i and d i by their renormalized values l̄ i and d̄ i as defined in Eq. (2.118) of [16], and expressing the results in terms of the physical parameters F π , M π and g A , see e.g. [15], leads to our final result for the static order-Q 0 contributions to the 2N one-pion-exchange scalar current operators, where the scalar functions o i (k) are given by Eq. (2.6) and the loop function L(k) is defined in the standard way. Finally, apart from the static contributions, we need to take into account the leading relativistic corrections emerging from tree-level diagrams with a single insertion of the 1/m-vertices from the Lagrangian L πN . Given our standard counting scheme for the nucleon mass, m ∼ Λ b 2 /M π , see e.g. [16], these contributions are shifted from the order Q −1 to Q 0 .
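As an aside, while the explicit expression is not reproduced above, the loop function L(k) appearing in such one-loop expressions is conventionally defined as (assuming the conventions of Refs. [15,16]; scheme-dependent finite pieces can be shuffled between L(k) and the LECs):

  L(k) \;=\; \frac{\omega}{k}\,\ln\frac{\omega + k}{2 M_\pi}\,, \qquad
  \omega \;\equiv\; \sqrt{k^2 + 4 M_\pi^2}\,,

which satisfies L(0) = 1, consistent with the zero-momentum-transfer limit used in appendix C.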
However, the explicit evaluation of diagrams emerging from a single insertion of the 1/m-vertices into the one-pion-exchange graph in Fig. 1 leads to a vanishing result. Given the relation between the scalar current operator and the nuclear forces discussed in section III, this observation is consistent with the absence of relativistic corrections in the (energy-independent formulation of the) nuclear forces at next-to-leading order. Last but not least, there are no contributions proportional to the energy transfer k 0 , which could appear from the explicit time dependence of the unitary transformations in the diagrams shown in Fig. 2. C. Two-pion-exchange contributions We now turn to the two-pion exchange contributions. In Fig. 4, we show all diagrams yielding non-vanishing results for the scalar current operator with two exchanged pions. The final results for the two-pion exchange operators are expressed in terms of the scalar functions t i (k, q 1 , q 2 ), which involve the three-point function; their explicit form is given in appendix B. Notice that the (logarithmic) ultraviolet divergences in the two-pion exchange contributions are absorbed into the renormalization of the LECs from L (2) N N described in the next section. D. Short-range contributions Finally, we turn to the contributions involving short-range interactions. In Fig. 5, we show all one-loop and tree-level diagrams involving a single insertion of the contact interactions that yield non-vanishing contributions to the scalar current. The relevant terms in the effective Lagrangian have the form given in Refs. [32,44], where N is the heavy-baryon notation for the nucleon field with velocity v µ , S µ = −γ 5 [γ µ , γ ν ]v ν /4 is the covariant spin operator, χ + = 2B [u † (s + ip)u † + u(s − ip)u], B, C S,T and D S,T are LECs, ⟨. . .⟩ denotes the trace in flavor space, u = √ U , and the 2×2 matrix U collects the pion fields. Further, the ellipses refer to other terms that are not relevant for our discussion of the scalar current operator. The total contribution of the diagrams of Fig. 5 can, after renormalization, be written in terms of the scalar functions s i (k) defined in Eq. (2.11). The renormalized, scale-independent LECs D̄ S and D̄ T are related to the bare ones D S , D T through the corresponding β-functions and the quantity λ of dimensional regularization, where γ E = −Γ ′(1) ≈ 0.577 is the Euler constant, d is the number of space-time dimensions and µ is the scale of dimensional regularization. Clearly, the C T -independent parts of the β-functions emerge from the two-pion exchange contributions discussed in the previous section. Notice that the LECs C S , C T , D̄ S and D̄ T also contribute to the 2N potential. However, experimental data on nucleon-nucleon scattering do not allow one to disentangle the M π -dependence of the contact interactions and only constrain certain linear combinations of the LECs [44]. The LECs D̄ S and D̄ T can, in principle, be determined once reliable lattice QCD results for two-nucleon observables, such as e.g. the 3 S 1 and 1 S 0 scattering lengths at unphysical (but not too large) quark masses, become available, see Ref. [60] and references therein for a discussion of the current status of research along this line. Last but not least, we found, similarly to the one-pion exchange contributions, no 1/m-corrections and no energy-dependent short-range terms at the order we are working.
Notice further that the loop contributions to the contact interactions are numerically suppressed due to the smallness of the LEC C T as a consequence of the approximate SU(4) Wigner symmetry [74,75]. III. SCALAR CURRENT AT ZERO MOMENTUM TRANSFER If the four-momentum transfer k µ of the scalar current is equal to zero, one can directly relate the current to the quark-mass derivative of the nuclear Hamiltonian. To see this, we first rewrite the definition of the scalar current in Eq. (2.2) in the form of Eq. (3.1), in terms of the nuclear Hamiltonian H eff and the unitary transformation U [s], which by construction satisfies U [s]| s=m q = η. Notice that the last term in the brackets in Eq. (2.2) vanishes for k 0 = 0. On the other hand, one obtains the expression in Eq. (3.4). Given the trivial relation between them, the right-most terms in Eqs. (3.1) and (3.4) are equal, and we obtain the relation in Eq. (3.6). At the order we are working, both commutators in this equation vanish (independently of the choice of unitary phases), leading to Eq. (3.7). In appendix C we demonstrate the validity of Eq. (3.7) for the two-nucleon potential at NLO, see Ref. [44] for the calculation of the quark mass dependence of nuclear forces using the method of unitary transformation. It is important to emphasize that on the energy shell, i.e. when taking matrix elements in eigenstates |i⟩ and |f⟩ of the Hamiltonian H eff corresponding to the same energy, all contributions from the commutator in Eq. (3.6) vanish, leading to an exact relation between the on-shell matrix elements of the scalar current at vanishing momentum transfer and those of the quark-mass derivative of H eff . For eigenstates |Ψ⟩ corresponding to a discrete energy E, H eff |Ψ⟩ = E|Ψ⟩, the Feynman-Hellmann theorem allows one to interpret the scalar form factor at zero momentum transfer in terms of the slope of the eigenenergy with respect to the quark mass, Eq. (3.9). In particular, for |Ψ⟩ being a single-nucleon state at rest, the expectation value on the left-hand side of Eq. (3.9) is nothing but the pion-nucleon sigma-term; for an extension to resonances |R⟩, see e.g. Ref. [76]. IV. SUMMARY AND CONCLUSIONS In this paper we have analyzed in detail the subleading contributions to the nuclear scalar isoscalar current operators in the framework of heavy-baryon chiral effective field theory. These corrections are suppressed by two powers of the expansion parameter Q relative to the well-known leading-order contribution, see Eq. (2.3). They comprise the one-loop corrections to the one-pion exchange and the lowest-order NN contact interactions as well as the leading two-pion exchange contributions. No three- and more-nucleon operators appear at the considered order. While the two-pion exchange terms do not involve any unknown parameters, the one-pion exchange contribution depends on the poorly known πN LEC d̄ 16 related to the quark mass dependence of the nucleon axial coupling g A . It can, in principle, be determined from lattice QCD simulations, see [77,78] for some recent studies. The short-range part of the scalar current depends on two unknown LECs which parametrize the quark-mass dependence of the derivative-less NN contact interactions. In principle, these LECs can be extracted from the quark-mass dependence of, say, the NN scattering length, see Refs. [44-46, 48, 50-52] for a related discussion. Finally, we have explicitly demonstrated that the scalar current operator at vanishing four-momentum transfer is directly related to the quark-mass dependence of the nuclear force. The results obtained in our work are relevant for ongoing DM searches and for matching to lattice QCD calculations in the few-nucleon sector, see e.g. [42,43] for recent studies along this line.
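As a side remark relevant for such matching to lattice QCD, where results are usually quoted as functions of the pion mass rather than the quark mass, recall the leading-order Gell-Mann-Oakes-Renner relation (a schematic statement; higher-order corrections modify the identification):

  M_\pi^2 \;=\; 2 B\, m_q \;+\; \mathcal{O}(m_q^2)\,, \qquad
  m_q \,\frac{\partial}{\partial m_q} \;\simeq\; M_\pi^2 \,\frac{\partial}{\partial M_\pi^2}\,,

so that the Feynman-Hellmann interpretation of the scalar current at zero momentum transfer, Eq. (3.9), can equivalently be phrased in terms of the slope of nuclear energy levels with respect to M_\pi^2.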
It is important to emphasize that our calculations are carried out using dimensional regularization. For nuclear physics applications, the obtained expressions for the scalar current operator need to be regularized consistently with the nuclear forces, which is a nontrivial task, see Refs. [7,79] for a discussion. Work along these lines using the invariant higher derivative regularization [80] is in progress. Acknowledgments We are grateful to Martin Hoferichter and Jordy de Vries for sharing their insights into these topics. Appendix A: At the order we are working, the general structure of the unitary operator U [s] can be written as a product of η-space rotations parametrized by the phases α s i introduced above, expressed in terms of the operators S (κ) n,p . Here and in what follows, we use the notation of Ref. [16]. Furthermore, S (κ) n,p denotes an interaction from the Hamiltonian with a single insertion of the scalar source s(x) − m q , n nucleon fields and p pion fields. The superscript κ refers to the inverse mass dimension of the corresponding coupling constant, where d, n and p denote the number of derivatives or pion-mass insertions at a given vertex and the number of nucleon and pion fields, respectively. Further, c v , c a , c p and c s refer to the number of external vector, axial-vector, pseudoscalar and scalar sources, in order. Appendix C: It is important to emphasize that in Ref. [44], the short-range LECs D̄ S and D̄ T have been shifted to absorb all momentum-independent contributions generated by the two-pion exchange. The corresponding shifts of D̄ S and D̄ T are given in Eq. (C.6). Performing the same shifts in the scalar current and using L(0) = 1 and Eq. (B.5), we indeed verify Eq. (3.7) for the two-nucleon potential at NLO.
Factors Associated with Impairment of Quadriceps Muscle Function in Chinese Patients with Chronic Obstructive Pulmonary Disease Background Quadriceps muscle dysfunction is well confirmed in chronic obstructive pulmonary disease (COPD) and reported to be related to a higher risk of mortality. Factors contributing to quadriceps dysfunction have been postulated, while not one alone could fully explain it and there are few reports on it in China. This study was aimed to investigate the severity of quadriceps dysfunction in patients with COPD, and to compare quadriceps muscle function in COPD and the healthy elderly. Methods Quadriceps strength and endurance capabilities were investigated in 71 COPD patients and 60 age-matched controls; predicted values for quadriceps strength and endurance were calculated using regression equations (incorporating age, gender, anthropometric measurements and physical activities), based on the data from controls. Potential parameters related to quadriceps dysfunction in COPD were identified by stepwise regression analysis. Results Mean values of quadriceps strength was 46% and endurance was 38% lower, in patients with COPD relative to controls. Gender, physical activities and anthropometric measurements were predictors to quadriceps function in the controls. While in COPD, forced expiratory volume in 1 second percentage of predicted value (FEV1% pred), nutritional depletion, gender and physical inactivity were identified as independent factors to quadriceps strength (R2 = 0.72); FEV1%pred, thigh muscle mass, serum levels of tumor necrosis factor-alpha (TNF-α) and gender were correlated to quadriceps endurance variance, with each p<0.05. Conclusion Quadriceps strength and endurance capabilities are both substantially impaired in Chinese COPD patients, with strength most affected. For the controls, physical activity is most important for quadriceps function. While for COPD patients, quadriceps dysfunction is related to multiple factors, with airflow limitation, malnutrition and muscle disuse being the main ones. Introduction Skeletal muscle dysfunction is well documented as an important systemic manifestation of chronic obstructive pulmonary disease (COPD) and has been recognized as a contributing factor in reduced exercise capacity, impaired quality of life and higher health-care utilization. Furthermore, COPD patients exhibit significant reductions in functional mobility and balance that may affect their ability to perform the activities of daily life. It has been suggested that these reductions in functional performance are related to the muscle dysfunction present in these patients [1]. It is of great interest that the pattern of limb muscle impairment in COPD is different from that seen in the respiratory muscles [2]. There is increasing interest in the role of peripheral skeletal muscles in COPD as it is a potential site of intervention for improving the patient's functional status. Among the reports, quadriceps function assessment was used in the majority of the studies for assessment of peripheral muscles' function as it is readily accessible and is a primary locomotor muscle. Quadriceps dysfunction can be considered in terms of loss of both muscle strength and endurance. Current opinion suggests that the reduction in muscle strength is due to a reduction in cross sectional area while the loss of endurance is due to muscle fiber type changes [3]. 
Quadriceps muscle dysfunction has been reported to be associated with decreased survival [4], poor functional status, and low quality of life [5]. More importantly, quadriceps muscle strength can better predict mortality than measures of lung function in this population [6]. Improved quadriceps muscle strength and endurance are recognized to underlie much of the increased exercise capacity observed following multidisciplinary pulmonary rehabilitation for COPD [7]. Thus, a better understanding of the severity of, and the factors associated with, impairment of quadriceps muscle function in patients with COPD would help develop new preventive interventions against quadriceps dysfunction and therapeutic approaches in the rehabilitation of these patients. However, there are few reports on quadriceps dysfunction in China, though there is a large population of COPD patients in China [8]. In recent years, factors related to quadriceps dysfunction have been postulated, such as systemic inflammation, muscle wasting and muscle disuse; however, no single factor can fully explain quadriceps muscle dysfunction in COPD. Therefore, this study aimed to investigate quadriceps dysfunction in Chinese patients with COPD and to explore the related underlying factors. We examined in detail quadriceps strength and endurance, the patients' nutritional status, muscle mass and physical activity, and the levels of two systemic inflammatory markers, tumor necrosis factor-alpha (TNF-α) and C-reactive protein (CRP), to examine the potential systemic inflammatory response. In addition, the present study aimed to investigate the predictors of quadriceps functional capabilities in the healthy elderly across an age range comparable to that typically observed in COPD. Subject selection The current research was approved by the Research Committee of Human Investigation of Guangzhou Medical University (Approval number: 2011-21). Informed written consent was obtained from each participant. 71 patients with stable COPD were recruited from the outpatient clinics of Guangzhou Institute of Respiratory Disease (Guangzhou, China) between Mar 2007 and Jun 2009. The diagnosis of COPD was made according to the criteria recommended by the GOLD guidelines [9], with spirometric confirmation of irreversible airflow limitation defined as a post-bronchodilator forced expiratory volume in 1 second (FEV 1 )/forced vital capacity (FVC) ratio <70%. Significant comorbidities were excluded by medical history, physical check-up and conventional laboratory investigations. All the COPD patients involved in the current study were ex-smokers with abstinence for more than 3 years. Exclusion criteria included a history of exacerbation in the preceding 3 months, co-morbid cardiac, rheumatologic or neuromuscular disorders, or unwillingness to participate in the study. Most of the COPD patients were on inhaled corticosteroids (400-800 µg budesonide-equivalent dose/day); none of them were on regular systemic corticosteroids, otherwise they would have been excluded, and about 15% of patients were naïve to inhaled corticosteroids. 60 subjects for the control group were recruited from the health check-up department of the First Affiliated Hospital of Guangzhou Medical University. The criteria for inclusion in the control group were as follows: (1) age-matched to the study group with COPD; (2) normal spirometry; (3) no respiratory symptoms or other disease affecting quadriceps function; (4) non-smoker or abstinent from smoking for more than 10 years.
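For illustration only, the spirometric criteria referred to above can be sketched in code; the FEV1/FVC < 0.70 criterion is the one used in this study, while the FEV1 %predicted cut-offs correspond to the GOLD grades mentioned later in the Discussion and are quoted from guideline conventions of that period, not from this paper:

  #include <string>

  // Sketch of the spirometric COPD criterion and GOLD severity grading
  // (illustrative only; thresholds assumed from contemporaneous GOLD guidance).
  std::string goldStage(double postBD_fev1_fvc, double fev1PercentPred) {
      if (postBD_fev1_fvc >= 0.70) return "criterion for COPD not met";
      if (fev1PercentPred >= 80.0) return "GOLD I (mild)";
      if (fev1PercentPred >= 50.0) return "GOLD II (moderate)";
      if (fev1PercentPred >= 30.0) return "GOLD III (severe)";
      return "GOLD IV (very severe)";
  }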
Methods Quadriceps function assessment. Quadriceps functional tests included strength and endurance performance. The quadriceps isometric maximal voluntary contraction force (QMVC) test was performed using the technique described in a previous report [10], with a specially designed chair (Figure 1). The chair was designed with the following four functions in mind: first, the chair was immovable, being fixed to the floor, while it was comfortable enough, and its armrest was strong enough, for subjects to exert their maximal force in a sitting position by wrapping their fingers around the armrest; second, a strain gauge and load cell (Strainstall, Cowes, UK) were installed under the chair so that the quadriceps force could be measured, with the height of the strain load cell parallel to the ankle of the subjects; third, the strain load cell could be easily disassembled for everyday calibration; fourth, the back of the chair was movable so it could be changed into a bed by laying the back flat, allowing subjects to lie down if necessary. The tests were performed with the subject in a sitting position at 90° hip flexion and the knee flexed at 90° over the end of the chair. An inextensible strap was attached around the subject's right leg just superior to the malleoli of the ankle joint. The strap was connected to the strain load cell, which was calibrated after each test with weights of known amounts. Subjects were required to try to extend their dominant leg as hard as possible against the inextensible strap. A computer screen was placed in front of the subjects so that the force generated was visible to both subjects and investigator; the computer screen thus served as positive feedback to help subjects perform the test. Repeated efforts were made with vigorous encouragement until there was no improvement in performance, and each effort was sustained for about 3-5 seconds. If maximal values were reproducible (<10% variability) for 3 consecutive efforts, i.e., the generated strength reached a plateau, the highest value of the 3 contractions was taken as the QMVC [11]. Surface electromyography (sEMG) was recorded for the quadriceps muscles vastus lateralis (VL), rectus femoris (RF), and vastus medialis (VM). The output signals of force and sEMG were recorded via an analogue-digital instrument (Powerlab 8/16SP Instruments, Austin, TX, USA) and a personal computer (Apple Computer Inc., Cupertino, CA, USA) running Chart 5.1 software. The quadriceps sEMG amplitude recordings were quantified using the root-mean-square (RMS). Typical signals of QMVC and sEMG from a normal male subject are shown in Figure 2. Endurance time. Endurance of the quadriceps was evaluated during an isometric contraction. After 10 minutes of rest following the QMVC maneuvers, subjects were instructed to maintain a tension representing 55%-60% of their own QMVC until exhaustion. The feedback provided by the computer screen helped subjects to maintain the determined submaximal tension. Subjects were strongly encouraged to continue until the tension dropped to 50% of QMVC or less for more than 3 seconds. Thus, quadriceps endurance was defined as the time to fatigue (QTf), i.e., the time at which the isometric contraction at 60% of maximal voluntary capacity could no longer be sustained. A sample signal from a normal male control is shown in Figure 3.
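The two quantitative rules described above, the RMS quantification of the sEMG amplitude and the acceptance of the highest of three reproducible maximal efforts as QMVC, can be summarized in a minimal sketch (an illustration written for this summary, not code from the study):

  #include <algorithm>
  #include <cmath>
  #include <vector>

  // Root-mean-square amplitude of an sEMG epoch.
  double rmsAmplitude(const std::vector<double>& samples) {
      if (samples.empty()) return 0.0;
      double sumSq = 0.0;
      for (double s : samples) sumSq += s * s;
      return std::sqrt(sumSq / samples.size());
  }

  // Accept the highest of three consecutive maximal efforts as QMVC if the
  // efforts agree within 10% of the best value (the plateau criterion above).
  bool qmvcFromThreeEfforts(double f1, double f2, double f3, double& qmvc) {
      const double best  = std::max({f1, f2, f3});
      const double worst = std::min({f1, f2, f3});
      const bool reproducible = (best - worst) <= 0.10 * best;
      if (reproducible) qmvc = best;
      return reproducible;
  }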
Nutritional status assessment. The nutritional status of all subjects was evaluated using an integrated approach with a modified multiparameter nutritional index (MNI) [12], which consisted of anthropometric measurements and visceral protein levels. Anthropometric measurements included body weight, triceps skinfold thickness (TSF) and mid-arm muscle circumference (MAMC), where MAMC = mid-arm circumference − π × TSF. Albumin, transferrin and prealbumin plasma concentrations were used as visceral protein levels. The MNI score was calculated as MNI = a+b+c+d+e+f (range 0 to 15). Table 1 shows the variables and point values used for the computation of the MNI score. Quadriceps muscle mass evaluation. The quadriceps muscle mass was evaluated indirectly by anthropometric measurements of the legs [13]. Measurements consisted of the quadriceps skinfold thickness (S) and the thigh girth of the subjects. With the patient standing and his weight evenly distributed, the thigh girth was determined on the nondominant side at half the distance between the inguinal crease and a point midway along the patella. Thigh muscle mass was thus evaluated as the skinfold-corrected thigh girth (CTG), with CTG = mid-thigh girth − π × S. Determination of cytokines. In serum, levels of TNF-α and CRP were determined. The TNF-α level was determined by ELISA kits (Quantikine; R&D Systems, Minneapolis, MN), and CRP by ELISA kits (CHEMICON Int CO. USA). Level of daily physical activity. The level of daily physical activity (PA) was assessed using a PA questionnaire [14] adapted for the elderly in China. The questionnaire on habitual PA consisted of 19 items, scored the past 3 years' household activities, sports activities, and other physically active leisure-time activities, and gave an overall PA score. An intensity code based on the net energetic costs of activities according to Voorrips et al [15] was used to classify each activity. The subjects were asked to describe the type of PA, the hours per week spent on it, and the period of the year in which the PA was normally performed. Statistical analyses Statistical analysis was performed using SPSS 13.0 (SPSS; Chicago, IL) and Minitab 16.0 (Minitab; Techmax Inc.) statistical packages for Windows. Measurement data were summarized as mean ± SD, and categorical data were summarized as number (percentage). A P value <0.05 was considered statistically significant. Two independent-sample t-tests and the Chi-square test were used for univariate testing between COPD patients and control subjects. In both the control and COPD groups, multiple regression models were developed by a stepwise method to determine factors independently contributing to quadriceps strength and endurance, respectively. In the stepwise regression analysis, Alpha-to-Enter 0.15 and Alpha-to-Remove 0.15 were used. Characteristics of the subjects The general characteristics of the control subjects and patients with COPD are shown in Table 2. Age and height were comparable between the two groups. Nutritional status and anthropometric data All nutritional variables were within the reference values in almost all the control subjects, but below the reference values in most of the COPD patients. The MNI score was significantly higher in COPD patients than in controls. Level of daily life physical activities The questionnaire for daily life PA showed that none of the participants had ever taken part in a rehabilitation program, while PA scores were significantly lower in COPD patients when compared to controls (table 2), indicating decreased physical activity among the patients.
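For concreteness, the two anthropometric corrections quoted earlier in the Methods (MAMC and the skinfold-corrected thigh girth CTG) amount to the following short sketch, with all lengths in the same unit, e.g. centimetres (an illustration, not code from the study):

  // Anthropometric indices as defined in the Methods section.
  constexpr double kPi = 3.14159265358979;

  // Mid-arm muscle circumference: MAMC = mid-arm circumference - pi * TSF.
  double mamc(double midArmCircumference, double tricepsSkinfoldTSF) {
      return midArmCircumference - kPi * tricepsSkinfoldTSF;
  }

  // Skinfold-corrected thigh girth: CTG = mid-thigh girth - pi * S.
  double ctg(double midThighGirth, double quadricepsSkinfoldS) {
      return midThighGirth - kPi * quadricepsSkinfoldS;
  }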
Quadriceps functional assessment As expected, there was a gender-related difference in quadriceps strength; QMVC was significantly lower in females than in males for both COPD patients and controls. When compared with gender-matched controls, the mean values of QMVC and QTf were both significantly reduced in COPD patients, as was also demonstrated for the RMS amplitudes of the sEMG signals from the VL, RF and VM muscles during the QMVC test (table 3). The mean values of quadriceps strength and endurance were 46% and 38% lower, respectively, in COPD patients than in controls. Among patients with COPD, there was no significant difference in QMVC between the steroid-naïve patients and those on inhaled corticosteroids. Serum levels of TNF-α and CRP Serum levels of TNF-α were significantly increased in patients [(6.98 ± 2.50) pg/ml] when compared to controls. Correlations In normal subjects, multivariate stepwise regression analysis suggested that QMVC was predicted by sex (0.386), PA scores (0.279) and weight (0.305), with an R 2 of 0.61 (p<0.0001); for endurance time, PA scores (0.519), CTG (0.374), weight (0.617) and sex (−0.985) were the contributors to QTf variance, with an R 2 of 0.58 (p<0.001). Based on the regression analysis results, we derived 2 predictive equations for quadriceps strength and endurance time from the healthy elderly with an age range of 58-76 years. The equations describing predicted QMVC force (kg) and QTf (s) were derived from these regressions. In COPD patients, multiple stepwise regression analysis identified sex, FEV 1 %pred, MNI and PA scores as statistically significant predictors, together explaining 72% of the QMVC variance. For endurance time, FEV 1 %pred, CTG, serum TNF-α levels, and sex were predictors of QTf, explaining 44% of the QTf variance. Table 4 shows the factors correlated with QMVC and QTf in COPD patients. Table 5 and table 6 show the standardized coefficients (b) for each predictor variable for QMVC and QTf, respectively, obtained from multiple regression analysis in the COPD group. Discussion The main findings of this study are that (1) both quadriceps strength and endurance capabilities are substantially impaired in Chinese patients with COPD, with strength and endurance being 46% and 38% lower, respectively, in the patients relative to age-matched controls; (2) impairment of quadriceps function correlated with multiple factors, with airflow limitation, malnutrition and muscle disuse taking important roles; and (3) using regression equations generated from a cohort of the healthy elderly across an age range from 58 to 76 years, we showed that physical activity was an important determinant of quadriceps functional capabilities in healthy individuals. As far as the authors know, this is the first study to characterize quadriceps dysfunction in Chinese patients with COPD and to give predictive equations for quadriceps strength and endurance time for the Chinese healthy elderly. Also, this is the first study to investigate the multiple factors related to quadriceps function in both COPD patients and the healthy elderly. In the present study, the expected gender-related difference in quadriceps strength was observed for both COPD patients and controls, and similar data were also reported by Miller et al [16], indicating that gender difference should be taken into account in the assessment of quadriceps function.
When compared to gender and age-matched controls, the mean value of QMVC was reduced by 47% and 45%, respectively, in female and male patients; the reduction was more severe than previously reported [17], indicating a more remarkable impairment of quadriceps strength in Chinese patients with COPD. But we should elucidate that most of our study patients had severe and very severe airflow limitation. RMS amplitudes of the sEMG signals from VL, RF and VM muscles were also decreased in COPD patients compared to the controls, supporting a significantly decreased strength in the patients, as muscle strength level could be reflected by the amplitudes of the sEMG signals quantified using RMS. For endurance capability, the values of QTf were lowered 39% and 37% in female and male patients, respectively, when compared to controls. Our data showed that quadriceps muscle strength was impaired to a greater extent than endurance in COPD patients, which was in line with the findings of Zattara-Hartmann et al [18]; on the contrary, Van't Hul et al [19] reported greater impairment in endurance for COPD patients. These conflicting results may be attributed to differences in the severity of airflow limitation between the study patients. The patients studied by Van't Hul et al were all in GOLD stage II to III, while most patients in our study were in GOLD stages III to IV. This explanation is supported by the results of regression correlation analysis in the present study, where b coefficient showed that FEV1%pred more significantly correlated with QMVC than with QTf in COPD patients, demonstrating that quadriceps strength is impaired to a larger extent than endurance in COPD. Remarkably, we found that both QMVC and QTf were significantly correlated with FEV1%pred in patients with COPD. Regarding the relationship of airflow limitation with quadriceps function, similar results have been yielded by some previous studies [10,20,21], though conflicting results were also reported by other studies [22][23][24]. Nevertheless, the highest prevalence of quadriceps weakness was observed in those with the most severe airflow obstruction in our study patients with COPD, which was also demonstrated in a large sample of COPD patients in another study [17], demonstrating that there may be an association between airflow limitation and quadriceps muscle dysfunction. The relationship between quadriceps dysfunction and airflow limitation may have multiple potential explanations. First, the increased cost of breathing as a result of the airflow limitation may well be associated with skeletal muscle weakness in COPD, especially of the lower limb muscles. Second, airflow limitation and the resultant greater respiratory muscle work often leads to respiratory muscle fatigue, which, in turn, increases sympathetic vasoconstrictor activity in the working limb via a supraspinal reflex [25]. The result is a decrease in limb blood flow and a corresponding reduction in oxygen delivery to peripheral muscles, which accelerate the development of quadriceps fatigue. Third, due to airflow limitation and the associated sensation of dyspnea, COPD patients often experience a downward spiral of symptom-induced inactivity and even muscle disuse, which in turn causes muscle structure changes and metabolic derangements, such as a shift from type I to type II skeletal muscle fibers [26], reduced mitochondrial density per fiber bundle [27], and reduced capillary density [28]. 
Each of these can correlate with a reduced capacity for aerobic metabolism and, ultimately, poorer muscle performance. In addition, due to airflow limitation and the associated impaired gas exchange, patients with COPD have chronic hypoxia to a varying degree, thus a compromised oxygen transport to limb locomotor muscles might be expected. Furthermore, hypoxemia may interfere with muscle differentiation and lead to muscle dysfunction via several pathways. For example, it has been shown that hypoxia inhibits myogenic differentiation through accelerated MyoD degradation and via the ubiquitin proteasome pathway [29]. In addition, hypoxemia might affect the contractile apparatus and enhance muscle oxidative stress. As for the nutritional status in the COPD patients, although we recruited the patients randomly at the onset of the study, it turned out that 74.65% patients had decreased body weight, with a mean BMI of less than 21 kg/m 2 and MNI score that was significantly elevated, indicating that malnutrition was prevalent in the COPD patients; moreover, the MNI score correlated inversely with QMVC in our patients. A similar study result has been already reported [30], and there is evidence that nutritional supplementation increases muscle strength [31]. In our previous study [32], we have found that skeletal muscle mass is substantially decreased in COPD patients and muscle wasting is the main manifestation of nutritional depletion. In the present study, CTG was identified as a contributor to QTf in both COPD patients and controls. This finding was in line with the study result of the association between muscle loss and increased muscle fatigability in COPD [33], suggesting that muscle wasting is, at least in part, responsible for impairment of quadriceps endurance in COPD. Muscle wasting is an effect of other pathophysiological changes such as muscle disuse and nutritional depletion. At the same time, the present study derived an equation to predict quadriceps strength and endurance, from the healthy elderly; the data showed a close relationship of PA scores with both QMVC and QTf among control subjects. In the classic description of QMVC measurement in COPD, strength was normalized to body weight [34], and in recent research, QMVC was recognized to be associated with airflow limitation, fat-free mass and age [17]. Partly consistent with that study, our data showed that quadriceps strength was predicted by multiple factors including airflow limitation, nutritional depletion, and muscle wasting and physical inactivity. As far as physical activities were concerned, our study found that the PA score was significantly lower in patients than in controls, which was in keeping with previous studies [35,36]; moreover, our data showed that the PA score had a big effect on both QMVC and QTf in the healthy elderly, based on the standardized b coefficient. Our finding was supported by the classic theory that exercise can improve muscle function while long term inactivity leads to muscle weakness in normal subjects. While in COPD, patients often have a sedentary lifestyle and muscle disuse, which leads to muscle weakness and limited exercise capacity. Our data was also supported by the accumulating study evidence derived from a rehabilitation program, which showed that muscle training can improve muscle strength in COPD patients [37], highlighting a tight link between muscle inactivity and muscle weakness in COPD. 
In addition, the present study analyzed the TNF-α levels in the serum of all participants, and found that TNF-α levels were significantly elevated in COPD patients relative to controls; moreover, regression analysis identified TNF-α as one of the contributors to QTf in COPD patients. TNF-α has been recognized as an important cytokine in skeletal muscle wasting [38], as it might compromise muscle function by stimulating muscle protein loss or inducing alterations of muscle protein catabolism. Our findings, together with previous studies, suggest that systemic inflammation plays an important role in the development of quadriceps dysfunction in COPD. Although one study found that TNF-α muscle protein levels are decreased in COPD [39], other studies that looked at sputum samples agree with these results [40,41]. With regard to CRP, our study failed to show a significant difference between COPD patients and the controls. In contrast, Broekhuizen et al [42] reported elevated CRP levels in advanced COPD. CRP is an acute-phase reactant, while patients in our study had been stable for 3 months; this may explain why CRP levels were not elevated in our study. Study Limitations The current study has several limitations. First, we had a small number of female patients with COPD and a relatively small number of male controls, which may limit the accuracy of our study results. Larger-scale studies should be conducted in the future to further improve the accuracy of the results. Second, our patients with COPD were on a variety of inhaled corticosteroids (ICS), which might modify quadriceps function, thereby interfering with the results. Finally, most of the patients in our study had severe or very severe COPD, such that these data cannot be generalized to patients with mild or moderate disease. Further studies are needed to address these issues. Conclusions The present study investigated quadriceps muscle strength and endurance in COPD patients as well as in the age-matched healthy elderly; quadriceps strength and endurance were 46% and 38% lower, respectively, in COPD patients relative to controls. We conclude that quadriceps dysfunction is correlated with multiple factors, with airflow limitation, nutritional depletion and muscle disuse taking important roles in its development, while physical activity contributes most to quadriceps function in the healthy elderly.
Low-Cost SCADA System Using Arduino and Reliance SCADA for a Stand-Alone Photovoltaic System SCADA (supervisory control and data acquisition) systems are currently employed in many applications, such as home automation, greenhouse automation, and hybrid power systems. Commercial SCADA systems are costly to set up and maintain; therefore, they are not used for small renewable energy systems. This paper demonstrates applying Reliance SCADA and an Arduino Uno to a small photovoltaic (PV) power system to monitor the PV current, voltage, and battery, as well as the efficiency. The designed system uses low-cost sensors, an Arduino Uno microcontroller, and free Reliance SCADA software. The Arduino Uno microcontroller collects data from the sensors and communicates with a computer through a USB cable. The Uno has been programmed to transmit data to Reliance SCADA on the PC. In addition, a Modbus library has been uploaded to the Arduino to allow communication between the Arduino and our SCADA system using the MODBUS RTU protocol. The results of the experiments demonstrate that the SCADA system works in real time and can be effectively used in monitoring a solar energy system. Introduction For several hundred years, fossil fuels have been consumed as the main source of energy on Earth. As a result, they are now experiencing rapid depletion. Researchers and scientists who understand the importance of renewable energy have dedicated their efforts to the research, expansion, and deployment of new energy sources to replace fossil fuels. Photovoltaics (PV) are an important renewable energy source. Also called solar cells, PV are electronic devices that can convert sunlight directly into electricity. The modern forms of PV were developed at Bell Telephone Laboratories in 1954 [1]. Despite their promising performance, PV have some limitations, such as dependence on factors like longitude, latitude, and weather, and being limited to daytime hours for generating power [2]. A SCADA system is software installed at one or several sites to monitor and control processes, and it is closely related to telemetry [3,4]. SCADA can monitor real-time electrical data measurements of solar modules and batteries and collect data from wind turbines, such as the condition of the gearbox, blades, and electric system [5,6]. Moreover, sun-tracker systems have also used SCADA to observe the solar insolation and the movement of the sun [6]. These days, commercial monitoring systems for applications such as photovoltaic installations are widespread; however, they are quite expensive. For example, SMA is a German company founded in 1981. It has many products, some of which are related to monitoring and control, for example, Sunny View, which can display all of the system data clearly. However, the major problem is that the device is costly; it costs about CA $793 [7,8]. A previous study presented a data acquisition and visualization system with storage in the cloud, applied to a photovoltaic system. That design was based on an embedded computer connected to the PV inverters using the RS485 standard, with a microcontroller reading the climate sensors and a web system used to display the data [9].
A low-cost monitoring system was also presented in [10]. That system determined losses in energy production; it was based on multiple low-cost wireless sensors and used voltage, current, irradiation, and temperature sensors installed on the PV modules.

In this paper, the designed SCADA system is cheaper than commercial SCADA systems while delivering comparable performance. To test this work, the SCADA system is employed to monitor, in real time, the parameters of a solar energy (photovoltaic) system consisting of solar modules, an MPPT charge controller, and batteries. The parameters are the current and voltage of the photovoltaic (PV) array and the current and voltage of the battery. Data acquisition is performed by an Arduino controller and sensors. All data are sent to a PC and shown on a user interface designed in Reliance SCADA, and the data are also saved on the computer as an Excel file. This allows users and operators to monitor the parameters of the PV system in real time. The SCADA system presented in this paper consists of two parts: hardware and software.

Hardware Design
The proposed SCADA system is designed to monitor the parameters of a small PV system installed at the Department of Electrical Engineering, Memorial University, St. John's, Canada. Figure 1 shows 12 solar panels, each rated at up to 130 watts and 7.6 amps. Two solar modules are connected in parallel, so the system shown in Figure 1 consists of 6 sets of 260 watts each. The SCADA system was designed to be low cost and to be expandable or modifiable in the future without major hardware changes. The basic elements of the design are an Arduino Uno controller and sensors, as shown in Figure 2.

Arduino Uno Microcontroller. The Arduino Uno is open-source hardware that is relatively easy to use; its license gives anyone permission to improve, build on, or expand Arduino. The original Arduino and its development environment were created in 2005 in Italy at the Smart Projects company. The board has 14 digital input/output pins and 6 analog inputs [11]. Figure 3 shows the Arduino Uno, while Table 1 shows the specifications of the hardware [11].

Current Sensor. The current sensors for the DC currents of the PV array and the batteries must be able to measure currents between 0 A and 20 A. In this work, an ACS712 sensor, based on the Allegro ACS712ELC chip, is used for sensing current; it is designed to be easily used with any microcontroller, such as the Arduino. The 20 A version of the ACS712 is used in this design, which is appropriate for the currents to be sensed. Two current sensors are installed: one measures the PV current and the other the battery current. Figure 5 demonstrates how a current sensor is connected in an electrical circuit with the Arduino Uno.

Voltage Sensor. The voltage sensor used in this work is a small 25 V sensor module built from two resistors of 30 kΩ and 7.3 kΩ. The maximum voltage of either the PV array or the battery is 25 V, so this sensor is appropriate. The output of the voltage sensor is between 0 V and 5 V, a range suited to the Arduino analog inputs. Two voltage sensors are needed in this experiment: one is installed before the MPPT to measure the PV voltage and the other is installed after the MPPT to measure the battery voltage.
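To make the sensing chain concrete, the sketch below shows how raw ADC counts from the ACS712 and the resistive divider are converted to amps and volts. This is an illustrative reconstruction rather than the authors' code: the analog pin assignments, the nominal 100 mV/A sensitivity of the 20 A ACS712, and the 2.5 V zero-current offset are assumptions based on values commonly used with these modules.

```cpp
// Illustrative Arduino sketch: read the PV/battery sensors and convert
// raw ADC counts to volts and amps. Pin numbers are assumptions.
const int PIN_PV_VOLT  = A0;   // 25 V voltage-divider module (30 kOhm / 7.3 kOhm)
const int PIN_PV_CURR  = A1;   // ACS712-20A hall-effect current sensor
const int PIN_BAT_VOLT = A2;
const int PIN_BAT_CURR = A3;

const float ADC_REF    = 5.0;                  // Uno ADC reference voltage (V)
const float ADC_COUNTS = 1023.0;               // 10-bit ADC full scale
const float DIV_RATIO  = (30.0 + 7.3) / 7.3;   // divider scales 0-25 V to 0-5 V
const float ACS_ZERO   = 2.5;                  // ACS712 output at 0 A (V), assumed
const float ACS_SENS   = 0.100;                // 20 A version: nominally 100 mV per amp

float readVolts(int pin) {
  float v = analogRead(pin) * ADC_REF / ADC_COUNTS;  // ADC counts -> volts at the pin
  return v * DIV_RATIO;                              // undo the divider
}

float readAmps(int pin) {
  float v = analogRead(pin) * ADC_REF / ADC_COUNTS;
  return (v - ACS_ZERO) / ACS_SENS;                  // remove offset, apply sensitivity
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.print("PV: ");
  Serial.print(readVolts(PIN_PV_VOLT));  Serial.print(" V, ");
  Serial.print(readAmps(PIN_PV_CURR));   Serial.println(" A");
  delay(1000);
}
```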
Hardware Setup
Figure 6 shows the hardware setup designed for the SCADA system.

Arduino IDE. The Arduino IDE is open-source software that features easy-to-write code that can be uploaded to any board. In this work, a new library had to be added to the IDE to configure communication between the Arduino Uno and the SCADA software over the MODBUS RTU protocol. Figure 7 shows how the system works and also shows the code that was burned onto the Arduino Uno.

Code. The code has two main functions: setup(), which is called once when the sketch starts, and loop(), which is called over and over and is the heart of the sketch. The most important elements of the code are the library calls mentioned at the beginning: regBank.setId(), regBank.add(), and regBank.set(). The purpose of the library is to connect the Arduino Uno to the Reliance SCADA software over the MODBUS RTU protocol. regBank.setId() defines the MODBUS device so that it works as a slave, and regBank.add() defines the addresses of the registers used to send data to Reliance SCADA on the computer; in this work, the addresses were 30001 to 30005, corresponding to the register map in Table 2 (the 3xxxx numbering is the conventional way of writing MODBUS input registers, whose zero-based offsets 0-4 appear in Table 2). regBank.set() is then used to write the measured values into those registers, and the slave.run() call that appears in the listing services MODBUS requests inside loop(). A hedged reconstruction of this sketch is given after the cost summary below.

Reliance SCADA. Reliance software is employed in numerous technologies for monitoring and controlling systems, and it can also be used for connecting to a smartphone or the web. Reliance is used in many colleges and universities around the world for education or scientific research purposes [12]. Figure 8 shows the user interface designed in Reliance SCADA to monitor the parameters of the photovoltaic system. The user interface has four real-time trends and four display icons that show values as digital numbers; in addition, it has two buttons and a container. These features are discussed in Results and Discussion.

Communication System
A MODBUS library is added to the Arduino Uno to allow communication with Reliance SCADA via a USB cable using the MODBUS RTU protocol. Table 2 shows the allocation of MODBUS addresses for MODBUS RTU on the Reliance SCADA side, with the corresponding MODBUS addresses for the Arduino Uno defined in the Arduino code.

Table 2: Allocation of MODBUS addresses for MODBUS RTU.
(1) Voltage of photovoltaic system - address 0
(2) Current of photovoltaic system - address 1
(3) Voltage of battery - address 2
(4) Current of battery - address 3
(5) Efficiency of MPPT - address 4

Cost of the SCADA System
Most facilities that operate several systems are looking for a low-cost SCADA system to monitor and control those systems remotely. The components used in this paper are quite cheap. Table 3 shows the price (in Canadian dollars) of all the components according to the amazon.ca website. According to Table 3, the whole price of the SCADA system was CA$82, which is inexpensive for a SCADA system that monitors the parameters of our system.
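Since the original listing survives only as a fragment, the following sketch is a hedged reconstruction of what the Code subsection describes: it reads the four sensors, computes the MPPT efficiency, and exposes the five values as MODBUS input registers 30001-30005. It assumes a ModbusSlave-style Arduino library matching the regBank/slave calls quoted above; the header names, slave ID, baud rate, attachment line, and integer scaling are assumptions, not the authors' code.

```cpp
// Hedged reconstruction of the MODBUS RTU slave sketch described in the text.
// Library headers and the regBank/slave API follow common examples of the
// ModbusSlave-style library implied by the quoted calls; details may differ.
#include <modbus.h>
#include <modbusDevice.h>
#include <modbusRegBank.h>
#include <modbusSlave.h>

modbusDevice regBank;   // register bank holding input registers 30001-30005
modbusSlave  slave;     // answers MODBUS RTU requests from Reliance SCADA

const float DIV = (30.0 + 7.3) / 7.3;   // voltage divider: 25 V -> 5 V range
float volts(int pin) { return analogRead(pin) * 5.0 / 1023.0 * DIV; }
float amps(int pin)  { return (analogRead(pin) * 5.0 / 1023.0 - 2.5) / 0.100; }

void setup() {
  regBank.setId(1);          // act as MODBUS slave ID 1 (assumed ID)
  regBank.add(30001);        // PV voltage
  regBank.add(30002);        // PV current
  regBank.add(30003);        // battery voltage
  regBank.add(30004);        // battery current
  regBank.add(30005);        // MPPT efficiency
  slave._device = &regBank;  // attach the register bank (per common examples)
  slave.setBaud(9600);       // must match the Reliance SCADA serial driver
}

void loop() {
  float pvV = volts(A0), pvI = amps(A1);  // pin assignments are assumptions
  float bV  = volts(A2), bI  = amps(A3);
  float eff = (pvV * pvI > 0.0) ? (bV * bI) / (pvV * pvI) : 0.0;

  // MODBUS registers are 16-bit integers, so values are scaled by 100 before
  // transmission (the scaling factor is an assumption, undone on the PC side).
  regBank.set(30001, (word)(pvV * 100));
  regBank.set(30002, (word)(pvI * 100));
  regBank.set(30003, (word)(bV  * 100));
  regBank.set(30004, (word)(bI  * 100));
  regBank.set(30005, (word)(eff * 100));

  slave.run();  // service any pending MODBUS request from Reliance SCADA
}
```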
Results and Discussion
In this work, the proposed SCADA system monitors a solar energy system, and several experiments were carried out. The experiments cover the measurement error of the sensor systems installed to measure the PV current and voltage, the battery current and voltage, and the MPPT efficiency, as well as the SCADA features. The sensors used contain errors, so these errors were calculated against calibrated instruments, as listed in Table 4. As can be seen in Table 4, the measurement error of the current sensors was the highest: the error percentages of the PV current sensor and the battery current sensor are about 3.42% and 3.10%, respectively. The error percentages of both voltage sensors were quite low, so their readings were closer to those of the calibrated instrument. The monitoring tasks are displayed on the PC; they include the PV parameters as graphs and digital numbers and the MPPT efficiency as a digital number. Figure 9 shows the user interface of SCADA after the system was operational.

The SCADA system is designed to update every minute. As shown in Figure 9, there are four trends: two of them observe the PV voltage and current, and the other two monitor the battery voltage and current. The figure also shows that the SCADA system updates every minute. The user interface additionally shows five icons displaying the parameter values as digital numbers, and these also update automatically every minute.

Our SCADA system allows all data to be easily saved on the computer as an Excel file. To save the data, the user simply hits the Export-Data icon and then the Save-Data icon; these icons are programmed by a script to save the data on the PC as an Excel file. Figure 10 shows a screenshot of data saved in Excel. The user interface also has a container that shows connection details: it indicates that the Arduino is connected to SCADA and gives a warning if there is any error in the connection.

The efficiency of the MPPT was also monitored; it represents the output power of the MPPT divided by the input power to the MPPT. Figure 11 presents the MPPT efficiency over various periods of time, with the efficiency ranging between 0.8 and 1.

Conclusion
In this paper, a low-cost SCADA system was designed and built with Reliance SCADA software and an Arduino Uno. The SCADA system was applied to a stand-alone photovoltaic system to monitor the current and voltage of the PV array and the batteries. The results of the experiments demonstrate that the SCADA system works in real time and can be used effectively to monitor a solar energy system. The developed system costs less than $100 and can be modified easily for a different PV system.

Figure 1: Solar panels on the roof of the engineering building. Figure 2: Monitoring parameters of the PV system using Arduino Uno. Figure 5: Connection drawing of the current sensor. Figure 6: Hardware setup of the SCADA system. Figure 9: User interface of SCADA while running. Table 1: Specifications of the Arduino board. Table 2: Allocation of MODBUS addresses for MODBUS RTU. Table 3: Price of the components of the SCADA system. Table 4: Measurement errors of the sensor system.
v3-fos-license
2019-01-22T22:23:51.483Z
2018-12-21T00:00:00.000
57013271
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.7717/peerj.6132", "pdf_hash": "4c6ccde31707bd0429229ffa9189cf84161c914a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:9", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "4c6ccde31707bd0429229ffa9189cf84161c914a", "year": 2018 }
pes2o/s2orc
Non-invasive monitoring of glucocorticoid metabolite concentrations in urine and faeces of the Sungazer (Smaug giganteus) Developing non-invasive techniques for monitoring physiological stress responses has been conducted in a number of mammal and bird species, revolutionizing field-based endocrinology and conservation practices. However, studies validating and monitoring glucocorticoid concentrations in reptiles are still limited. The aim of the study was to validate a method for monitoring glucocorticoid metabolite concentrations in urine (uGCM) and faeces (fGCM) of the cordylid lizard, the Sungazer (Smaug giganteus). An adrenocorticotropic hormone (ACTH) challenge was conducted on one male and two females with both urine and faecal material being collected during baseline and post-injection periods. Steroid extracts were analysed with four enzyme immunoassays (EIAs)namely: 11-oxoaetiocholanolone, 5α-pregnane-3β-11β-21-triol-20-one, tetrahydrocorticosterone, and corticosterone. A considerable response in fGCM and uGCM concentrations following ACTH administration was observed in all subjects, with the 5α-pregnane-3β-11β-21-triol-20-one and tetrahydrocorticosterone EIAs appearing to be the most suited for monitoring alterations in glucocorticoid metabolite concentrations in S. giganteus using faeces or urine as hormone matrix. Both EIAs showed a significantly higher concentration of glucocorticoid metabolites in faeces compared to urine for both sexes. Collectively, the findings of this study confirmed that both urine and faeces can be used to non-invasively assess adrenocortical function in S. giganteus. INTRODUCTION Historically, reptiles have been seen as a vertebrate group with limited importance to the natural environment, with the disappearance of the taxa unlikely to make any noteworthy difference (Zim & Smith, 1953). Thankfully, this sentiment has disappeared as scientists realize the importance of reptiles as an integral part of the ecosystem and thus important indicators of environmental quality (Gibbons & Stangel, 1999). Despite research showing that reptile numbers decline on a similar scale in terms of taxonomic breadth, geographic scope and severity as amphibians (Gibbons et al., 2000), reptiles remain one of the least studied vertebrate groups, being considered of less general interest compared to other fauna (Bonnet, Shine & Lourdais, 2002). The cryptic colorations and nature of reptiles (Zug, Vitt & Caldwell, 2001), as well as the general low population numbers inherent to the taxa (Todd, Willson & Gibbons, 2010), often results in a limited number of individuals available to monitor during a study. A number of factors have been suggested to contribute to the currently recognized decline in reptiles globally (see Todd, Willson & Gibbons, 2010 for a review). However, the direct and indirect effects of factors such as global climate change, disease, and habitat pollution are sometimes difficult to quantify and relate to individual and population health and survivability (Gibbons et al., 2000). In this regard, monitoring physiological stress patterns in reptiles may provide an important insight into the susceptibility of reptiles to population declines when faced with various environmental threats. Stress is commonly referred to as the stimulus that may threaten, or appear to threaten, the general homeostasis of an individual (Wielebnowski, 2003;Hulsman et al., 2011). 
The perception of a stressor leads to the activation of the hypothalamic-pituitary-adrenal (HPA) axis and, consequently, to an increase in glucocorticoid (GC) secretion (Sapolsky, Romero & Munck, 2000;Hulsman et al., 2011). An acute increase in GC concentrations can be adaptive in nature, increasing energy availability and altering behavior, while indirectly regulating cardiovascular and metabolic parameters (Romero, 2002;Sapolsky, 2002;Reeder & Kramer, 2005;Walker, 2007). However, prolonged elevation of GC concentrations can lead to a number of deleterious effects, such as the suppression of the immune and reproductive systems, muscle atrophy, growth suppression, and a shortened life span (Möstl & Palme, 2002;Sapolsky, 2002;Charmandari, Tsigos & Chrousos, 2005;Cohen, Janicki-Deverts & Miller, 2007). Thus, monitoring GC concentration in endangered and threatened species can be an important tool for assessing physiological stress in individuals exposed to natural and anthropogenic stressors. Non-invasive hormone monitoring techniques, through the collection of urine or faeces, hold numerous advantages over the traditional use of blood collection. Firstly, there is no need to capture or restrain study animals for sample collection, thus removing any potential stress-related feedback, and thereby also increases safety for both animal subjects and researchers (Romero & Reed, 2005). Further, as a result of the general ease of collection, longitudinal sampling and hormone monitoring of specific individuals are possible (Heistermann, 2010). Finally, hormone metabolite concentrations determined from matrices like faeces, urine, or hair are usually less affected by episodic fluctuations of hormone secretion, as circulating hormone concentrations are accumulating in these matrices over a certain period of time (Vining et al., 1983;Creel, MarushaCreel & Monfort, 1996;Russell et al., 2012). However, prior to the first use of the chosen assays and specific matrices for monitoring physiological stress in a species, it is important that the approach is carefully validated to ensure a reliable quantification of respective GCs (Touma & Palme, 2005). A preferred method of validation is the physiological activation of the HPA axis through the injection of adrenocorticotropic hormone (''ACTH challenge '', (Touma & Palme, 2005), which results in a distinct increase in GC production from the adrenal gland. Collected pre-and post-injection samples are subsequently analyzed to determine which of the tested enzyme immunoassays (EIA) reflects the induced increase in GC concentrations best. Historically, GC patterns in reptiles have been monitored via serum analysis, for example in the red-eared slider turtle (Trachemys scripta elegans, Cash, Holberton & Knight, 1997), the Galapagos marine iguana (Amblyrhynchus cristatus, Romero & Wikelski, 2001), or the tuatara (Sphenodon punctatus, Tyrrell & Cree, 1998). However, some more recent studies attempting to understand the physiological response inherent in reptiles to environmental stressors have already opted for non-invasive hormone monitoring in reptiles, e.g., in Nile crocodiles (Crocodylus niloticus, Ganswindt et al., 2014), the three-toed box turtle (Terrapene carolina triunguis, Rittenhouse et al., 2005), the green anole (Anolis carolinensis, Borgmans et al., 2018) or the green iguana (Iguana iguana, Kalliokoski et al., 2012). The Sungazer (Smaug giganteus, formerly Cordylus giganteus; Fig. 
S1) is a cordylid lizard endemic to the grassland of the Free State and Mpumalanga provinces of South Africa (De Waal, 1978;Jacobsen, 1989). It is unique among the cordylid lizards as an obligate burrower rather than rupicolous (Tonini et al., 2016;Parusnath et al., 2017). The species is currently facing large scale habitat degradation and population declines as a result of anthropogenic activities such as agricultural repurposing of its natural habitat, road construction, electricity infrastructure, mining developments, as well as the pet and traditional medicine trade (Van Wyk, 1992;McIntyre & Whiting, 2012;Mouton, 2014). Consequentially, S. giganteus is now listed as vulnerable by the International Union for the Conservation of Nature (IUCN, Alexander et al., 2018). The aim of the study was to examine the suitability of four enzyme immunoassays (EIA) namely, 11-oxoaetiocholanolone, 5α-pregnane-3β-11β-21-triol-20-one, tetrahydrocorticosterone, and corticosterone, for monitoring adrenocortical function in S. giganteus by determining the stress-related physiological response in faeces and urine following an ACTH challenge test. Study site and animals The study was conducted at the SANBI National Zoological Garden (NZG), Pretoria, South Africa (25.73913 • N, 28.18918 • E) from the 24th of November 2017 to the 5th of December 2017. The study animals, consisting of one male (M1:291 g) and two females (F1: 295 g) and F2: 344 g), were housed in individual enclosures within the Reptile and Amphibian Section of the NZG. Individuals were separated by a 1.5 m high wall, which resulted in study animals not being in visual contact with one another. Each enclosure (2 m × 1.5 m) was covered in coarse river sand and included an artificial burrow constructed from fiberglass, UV-light and a water bowl with water available ad libitum. The light regime (13 L: 11 D) and humidity (range: 44-50%) were kept constant throughout the study period. A combination of meal worms and fresh, green vegetables were provided daily to all individuals. Prior to the start of the study, all individuals were given a two-week acclimatization period to the new enclosure and presence of researchers. The limited number of individuals used during the study reflects the availability of study animals in a suitable setting, as well as the difficulty in receiving approval to conduct research on vulnerable and endangered species. Sample collection and ACTH challenge In reptiles, urine and faeces can be excreted in unison, though not mixed ( Fig. S2; Singer, 2003); urine is a white, solid substance, compared to the dark, solid faecal component, which allows for the separation of the two matrices with limited levels of cross-contamination (Kummrow et al., 2011). During the entire monitoring period, collected urine and faeces were separated during collection, and the two parts placed into separate 1.5 ml microcentrifuge tubes, sealed, and immediately stored at −20 • C until further processing. Following a two-week acclimatisation period, enclosures were checked for urine and faecal samples during the active period of S. giganteus (6 am-6 pm), for seven days. Cages were checked hourly to limit the effect of bacterial and environmental degradation of urine and faecal samples. In the morning hours of the eighth day all three individuals were injected intramuscularly with 0.45 µg synthetic ACTH g −1 bodyweight (SynACTH R , Novartis, South Africa Pty Ltd) in a 100 µl saline transport. 
This ACTH dose was chosen as it has been used successfully by a number of studies conducted on amphibian species such as the Fijian ground frog (Platymantis vitiana, Narayan et al., 2010), tree frog (Hypsiboas faber, Barsotti et al., 2017) and the American bullfrog (Rana catesbeiana, Hammond et al., 2018) to evoke a stress response. Subsequently, the individuals were released back into their individual enclosures, with faecal and urine collection continuing until day 15 of the study. The study was performed with the approval of the National Zoological Garden's Animal Use and Care Committee (Reference: P16/22). Steroid extraction in urine and faecal samples Urine and faecal samples were lyophilized, pulverized and sifted through a mesh strainer to remove any undigested material, resulting in a fine faecal and urine powder (Heistermann, Tari & Hodges, 1993). Subsequently, 0.050-0.055 g of the respective urine and faecal powder was extracted with 1.5 ml 80% ethanol in water. After vortexing for 15 min, the suspensions were centrifuged for 10 min at 1,600 g and the resulting supernatants transferred into new microcentrifuge tubes and stored at −20 • C until analysis. Enzyme immunoassay analyses Depending on the original matrix, steroid extracts were measured for immunoreactive faecal glucocorticoid metabolite (fGCM) or urinary glucocorticoid metabolite (uGCM) concentrations using four different EIAs: (i) an 11-oxoaetiocholanalone (detecting fGCMs with a 5β-3α-ol-11-one structure), (ii) a 5α-pregnane-3β-11β-21-triol-20-one (measuring 3β-11β-diol-CM), (iii) a tetrahydrocorticosterone, and (iv) corticosterone EIA. Details about assay characteristics, including full descriptions of the assay components including cross-reactivities, can be found in Möstl et al. (2002) for the 11oxoaetiocholanalone EIA, Touma et al. (2003) for the 5α-pregnane-3β-11β-21-triol-20one, Palme & Möstl (1997) for the corticosterone EIA and in Quillfeldt & Möstl (2003) for the tetrahydrocorticosterone EIA. Assay sensitivities, which indicates the minimum amount of respective immunoreactive hormone that can be detected at 90% binding, as well as the intra-and inter-assay coefficients of variation of high and low quality controls for each EIA is shown in Table 1. Serial dilutions of extracted faecal and urine samples gave Table 1 The enzyme immunoassay specific parameters used during this study. The sensitivity as well as the intra-and inter-assay coefficient of variation (CV) of the four enzyme immunoassays used during the study. Enzyme immunoassay Sensitivity (ng/g dry weight) Intra-assay CV Inter-assay CV displacement curves that were parallel to the respective standard curves in the two assays of choice (5α-pregnane-3β-11β-21-triol-20-one and tetrahydrocorticosterone EIAs), with a relative variation in slope of <4%. All EIAs were performed at the Endocrine Research Laboratory, University of Pretoria, South Africa, as described previously (Ganswindt et al., 2002). Data analysis A total of six faecal and urine samples were analyzed for each individual. Individual median fGCM and uGCM concentrations from pre-injection samples were calculated, reflecting individual baseline concentrations. To determine the effect of the ACTH injection on the HPA axis, the fGCM and uGCM concentrations from post-injection samples were converted to percentage response, by calculating the quotient of individual baseline and related fGCM/uGCM samples. In this regard, a 100% (1-fold) response represents the baseline value. 
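For readers who want to reproduce this bookkeeping, the percentage-response calculation described above (together with the MAD screening described in the next paragraph) reduces to a few lines of arithmetic. The study's own analysis was run in R; the sketch below is only an illustration in C++ with made-up numbers, not the authors' code.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

// Median of the pre-injection samples = individual baseline concentration.
double median(std::vector<double> v) {
  std::sort(v.begin(), v.end());
  size_t n = v.size();
  return (n % 2) ? v[n / 2] : 0.5 * (v[n / 2 - 1] + v[n / 2]);
}

// Post-injection value expressed as a percentage of baseline
// (100% = 1-fold = the baseline value itself).
double percentResponse(double postValue, double baseline) {
  return postValue / baseline * 100.0;
}

// Mean absolute deviation of the baseline samples, expressed as a percentage
// of the baseline value, used to judge the stability of each EIA.
double madPercent(const std::vector<double>& baselineSamples, double baseline) {
  double sum = 0.0;
  for (double x : baselineSamples) sum += std::fabs(x - baseline);
  return (sum / baselineSamples.size()) / baseline * 100.0;
}

int main() {
  // Made-up pre-injection concentrations (arbitrary units), not study data.
  std::vector<double> pre = {0.41, 0.38, 0.45, 0.40, 0.43, 0.39};
  double baseline = median(pre);
  std::printf("baseline       = %.3f\n", baseline);
  std::printf("peak response  = %.1f%%\n", percentResponse(1.45, baseline));
  std::printf("MAD            = %.1f%% of baseline\n", madPercent(pre, baseline));
  return 0;
}
```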
Furthermore, the mean absolute deviation (MAD) was calculated for the baseline (pre-injection) sample set. The MAD of a dataset is the average distance between each baseline data point and the calculated central value, and thus represents the variability of the baseline samples collected; the lower the MAD value for a specific EIA, the more stable the assay. Here, the individual baseline uGCM/fGCM concentration was subtracted from all pre-injection fGCM/uGCM values for each EIA-specific data set, the differences were taken as absolute values, and the mean of these absolute values was calculated as the MAD value for each EIA. The MAD values were converted to a percentage deviation value (MAD/baseline value × 100) to allow comparison between EIAs. To determine the effect of the ACTH injection, the absolute change in fGCM and uGCM concentration was determined by calculating the quotient of baseline and post-injection peak fGCM and uGCM samples. MAD values below 15% were regarded as preferable. The most appropriate EIA for measuring fGCM and uGCM concentrations in the species was chosen by comparing (1) the highest post-injection signal and (2) the lowest MAD values observed. Values are given as mean ± standard deviation (SD) where applicable. Analytical statistics and graphics were produced using R software (R 3.2.1; R Development Core Team, 2013).

(Table 2: Urinary and faecal excretion rates, along with the time to peak urinary and faecal glucocorticoid metabolite concentrations. The average faecal and urine excretion rates for the female and male individuals, the time to peak fGCM and uGCM response, and the respective sample numbers are shown for each study animal. Values are given as mean ± standard deviation.)

Defecation rate and MAD results
The average defecation rate (time between defecation events) showed considerable variation between individuals and matrices (Table 2). The percentage MAD values were considerably lower for all EIAs when analyzing faecal (range: 3.17-15.67%) compared to urine (range: 13.31-56.52%) samples. For faeces, although the corticosterone EIA showed the lowest average percentage MAD value (mean ± SD = 8.37 ± 5.54%), the remaining three EIAs all showed comparably low average MAD levels (range: 9.95-11.02%). In contrast, all four EIAs showed high average percentage MAD levels in urine, with the 11-oxoaetiocholanolone EIA having the lowest average MAD value (mean ± SD = 17.17 ± 5.77%).

Urinary glucocorticoid metabolite analysis
Similar to the fGCM findings, all four EIAs showed a considerable response in uGCM concentrations (149.53%-651.82%) following the ACTH injection (Table 3). For the two females, the 5α-pregnane-3β-11β-21-triol-20-one and tetrahydrocorticosterone EIAs showed the highest response, exceeding 340%, in the first urine sample collected 27 h post ACTH administration (Tables 2 and 3, Figs. 2A and 2B).

(Table 3: The urinary and faecal glucocorticoid metabolite response following ACTH administration in Smaug giganteus. The peak percentage glucocorticoid response in both faeces and urine, across all four enzyme immunoassays tested, for the two female and one male individuals following the adrenocorticotropic hormone challenge. Values are given as mean ± SD.)

DISCUSSION
In the current study, the defecation rate of the study animals was prolonged and varied substantially within and between individuals.
Extended defecation rates have been observed in a number of reptile species, such as the Italian wall lizard (Podarcis sicula, ∼50 h; Vervust et al., 2010), the veiled chameleon (Chamaeleo calyptratus, ∼96 h; Kummrow et al., 2011), the six-striped runner (Cnemidophorus sexlineatus, 23-26 h; Hatch & Afik, 1999), and a variety of snake species (45-3,180 h; Lillywhite, De Delva & Noonan, 2002). Additionally, these studies have also shown high levels of individual variability in gut retention times; for example, Kummrow et al. (2011) observed individual excretion rates in C. calyptratus ranging from 48-120 h, while Hatch & Afik (1999) found the excretion rate in C. sexlineatus to range from 20-72 h. Understanding species-specific differences and individual variability in faecal and urinary defecation rates is important for a number of reasons. Firstly, the infrequent and extended excretion of urinary and faecal material in reptiles may complicate data interpretation (Ganswindt et al., 2014). Furthermore, the movement of urine into the cloaca (urodeum) before moving into the intestines, where urinary and faecal material can be excreted in unison (Singer, 2003), can further complicate the distinction between matrix-specific retention times and steroid hormone metabolite excretion routes. As it is difficult to collect frequent faecal and urine samples consistently in S. giganteus and other reptile species, it may be advisable to monitor GC metabolite patterns over a longer period.

The MAD values for the four EIAs used in the fGCM analysis indicated low levels of variation from the predetermined baseline values. In contrast, the MAD values calculated for the four uGCM EIAs showed high levels of variation from the calculated baseline levels. As such, GC metabolite excretion via faeces may be less prone to regular fluctuation than excretion via urine, although further research is required to confirm this. Following ACTH injection, the peak fGCM response was observed in the first faecal sample collected from all study animals. Ganswindt et al. (2014) found the peak fGCM concentration following ACTH injection in the first collected faecal sample from C. niloticus. Similarly, Cikanek et al. (2014) observed peak fGCM concentrations in the first sample collected following exposure to a stressor. The pooling of faecal material in the reptile gut over an extended period of time may explain why peak fGCM responses are observed in the first sample post-injection in reptiles and other infrequent defecators. However, the available literature on reptile fGCM monitoring is limited, with a number of studies failing to report when the peak fGCM levels were observed or choosing to pool samples into larger time periods (Rittenhouse et al., 2005). Although all four EIAs displayed considerable peak fGCM responses for both sexes, the tetrahydrocorticosterone and 5α-pregnane-3β-11β-21-triol-20-one EIAs performed best in our study, based on (i) EIA stability, as seen in the low MAD values, and (ii) the magnitude of the peak percentage fGCM response following the ACTH injection. As such, both EIAs seem to be suitable for monitoring alterations in fGCM concentration in S. giganteus faecal material. The peak uGCM concentrations following ACTH administration were observed in the first and third collected urine samples for the females and the male, respectively. To our knowledge, this is the first study to quantify the uGCM response following the activation of the HPA axis through physiological or biological stressors.
In reptiles, the movement of urine into the intestine, and the resulting pooling effect along with faeces over time, may explain why peak uGCM responses were observed within the first collected samples for females and third sample for males. Similar to the fGCM analysis, all four EIAs used during the study were able to monitor alterations in uGCM concentrations following the ACTH administration; the tetrahydrocorticosterone and 5α-pregnane-3β-11β-21-triol-20-one EIAs again showed the highest uGCM response in this regard. With all uGCM MAD values considerably higher than observed for the fGCM analysis, the peak uGCM response values were used to determine EIA suitability; in this regard, both the tetrahydrocorticosterone and 5α-pregnane-3β-11β-21-triol-20-one EIAs were deemed suitable for monitoring alterations in uGCM concentration in S. giganteus urine. Conclusion The ability to monitor physiological stress patterns in endangered reptile species, through non-invasive hormone monitoring techniques, offers conservationists an ideal tool which can be implemented within both free-ranging and captive setups with limited effort. With the increase in human-driven factors leading to substantial decreases in reptile populations, the need for such techniques are becoming more important. This study has successfully validated such a technique for monitoring the stress response in S. giganteus in both urine and faeces by using the 5α-pregnane-3β-11β-21-triol-20-one or tetrahydrocorticosterone EIA. Both assays showed low MAD values as well as a considerable response in fGCM and uGCM concentrations following ACTH injection. As such, both sample matrices can be used to monitor physiological stress in S. giganteus. Despite the results of this study, a number of uncertainties need to be addressed by researcher conducting further studies on the topic. Of greatest concern is the observed gut passage time and time to peak fGCM and uGCM concentrations between individuals. Although the time to peak fGCM (24 h) and uGCM (27 h) responses were similar in both females, the monitored male showed a prolonged gut passage time with peak fGCM and uGCM concentrations 81 h and 70 h later, respectively. However, if in fact differences in gut passage time or GC metabolite patterns between individuals or sexes of the species exist is yet to be determined by examining larger study populations. Currently, we recommend collecting only the faecal or urine component for GC metabolite monitoring in S. giganteus. Despite the limitations of this study the findings increased our understanding of stress hormone production, metabolism and excretion pattern in the species. We hope this will encourage and stimulate future research not only on this species, but reptiles in general, especially concerning the non-invasively examining the physiological stress response linked to a host of anthropogenic and natural factors.
v3-fos-license
2022-01-28T16:58:27.956Z
2022-01-21T00:00:00.000
246336010
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1420-3049/27/3/704/pdf", "pdf_hash": "1bf858fdd82c60b86ae6da884f31eefe6bef1416", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:13", "s2fieldsofstudy": [ "Medicine" ], "sha1": "2f6828dc9719104a798ed71ef65d0c88736d309c", "year": 2022 }
pes2o/s2orc
The System Profile of Renal Drug Transporters in Tubulointerstitial Fibrosis Model and Consequent Effect on Pharmacokinetics With the widespread clinical use of drug combinations, the incidence of drug–drug interactions (DDI) has significantly increased, accompanied by a variety of adverse reactions. Drug transporters play an important role in the development of DDI by affecting the elimination process of drugs in vivo, especially in the pathological state. Tubulointerstitial fibrosis (TIF) is an inevitable pathway in the progression of chronic kidney disease (CKD) to end-stage renal disease. Here, the dynamic expression changes of eleven drug transporters in TIF kidney have been systematically investigated. Among them, the mRNA expressions of Oat1, Oat2, Oct1, Oct2, Oatp4C1 and Mate1 were down-regulated, while Oat3, Mrp2, Mrp4, Mdr1-α, Bcrp were up-regulated. Pearson correlation analysis was used to analyze the correlation between transporters and Creatinine (Cr), OCT2 and MATE1 showed a strong negative correlation with Cr. In contrast, Mdr1-α exhibited a strong positive correlation with Cr. In addition, the pharmacokinetics of cimetidine, ganciclovir, and digoxin, which were the classical substrates for OCT2, MATE1 and P-glycoprotein (P-gp), respectively, have been studied. These results reveal that changes in serum creatinine can indicate changes in drug transporters in the kidney, and thus affect the pharmacokinetics of its substrates, providing useful information for clinical use. Introduction Drug combination is a joint therapeutic scheme for the treatment of clinical diseases. However, the incidence of drug-drug interactions (DDIs) is remarkably increasing, resulting in a variety of adverse reactions, even threatening human life [1]. Drug transporters are one of the main targets for DDIs. Kidney tissue, the main excretory organ in the body, shows the distribution of drug transporters. Many drugs (including organic anion drugs, organic cationic drugs, and peptide drugs) are mediated by drug transporters concentrated in proximal renal tubules during renal excretion [2]. Once the expression of drug transporters changes, it binds to affect the pharmacokinetics of drugs. Therefore, the Food and Drug Administration and National Medical Products Administration of China have pointed out that eleven drug transporters in the kidneys, including organic anion transporter 1 (OAT1), organic anion transporter 1 (OAT3), organic anion transporter polypeptide 4C1 (OATP4C1), organic cation transporter (OCT2), P-glycoprotein (P-gp), breast cancer resistance protein (BCRP), multi-drug and toxin extrusion protein 1 (MATE1), multi-drug and toxin extrusion protein 2-K (MATE2-K), organic anion transporter 4 (OAT4), multidrug resistance-associated protein 2 (MRP2) and multidrug resistance-associated protein 4 (MRP4), need to be researched for drug applications [3,4]. CKD is widespread in the world, affecting nearly 13% of the population, and CKD has become a global public health problem [5]. According to the online data of the Centers for Disease Control and Prevention, the number of CKD deaths increased by about 12% from 2011 to 2018, ranking ninth in the top ten fatal diseases [6]. Tubulointerstitial fibrosis (TIF) is a common pathological change in CKD progression to end-stage renal disease [7,8]. With kidney damage, CKD is often accompanied by hypertension, cardiovascular disease, diabetes, and other complications. Therefore, combination therapy is a frequent method for patients with CKD [9,10]. 
In the clinic, drug administration in patients with CKD must be handled very cautiously. Creatinine (Cr) is an endogenous substance that is filtered out through the glomerulus [11], and creatinine clearance (Ccr) is commonly used to evaluate renal function [12,13]. When a drug is eliminated primarily by glomerular filtration, the clinical administration schedule can be adjusted according to the patient's Cr/Ccr under pathological conditions. However, there have been no clear reports on the in vivo elimination of drugs excreted by drug transporters, on the changes in drug transporter expression in TIF, or on the relationship between Cr/Ccr and the drug transporters. The glomerular filtration rate (GFR) and proteinuria are still widely used diagnostic indicators, but both change late in the disease. It is therefore urgently necessary to explore the relationship between new indicators and the transporters. Accordingly, this study focuses on the relationship between kidney transporters and Cr/Ccr in a unilateral ureteral obstruction animal model, to provide useful data for clinical drug use and drug combination.

The Renal Parameters in TIF Rats
To observe the dynamics of kidney tissue in TIF rats, orbital blood and kidneys were harvested on the 4th, 7th, 10th, and 14th days after modeling in the model group. The renal structure is illustrated in Figure 1A. With increasing modeling time, the right kidney of the rats showed obvious swelling, a translucent and lightly colored epidermis, and cystic changes, and contained brown turbid liquid. Compared with the control group, the wet weight of the right kidney in the model group increased by approximately 1.51-3.05-fold with increasing modeling time and reached its maximum value on the 14th day (Figure 1B). With increasing modeling time, the coefficient of the right kidney increased by about 1.017-2.507-fold compared with that of the left kidney (Figure 1C). Measurement of serum Cr revealed that the Cr concentration increased 1.24-1.58 times with increasing modeling time (Figure 1D). In contrast, Ccr decreased to 39.8-70.7% with increasing modeling time (Figure 1E).

(Figure 1: (B) The ligated kidney of rats dissected in the model group on different days was weighed and the weight compared with that of the control group. (C) The renal index was calculated as the ratio of the weight of the left kidney and the ligated kidney to body weight; L: left kidney, R: ligated kidney. (D) Changes in serum Cr concentration of rats at different modeling times, compared with the control group. (E) Changes in Ccr of rats at different modeling times, compared with the control group. Data are expressed as mean ± SD; **** p < 0.0001, *** p < 0.001, ** p < 0.01, * p < 0.05, ns p > 0.05.)

Histopathological Findings
H&E staining and Masson staining were employed to examine the pathological morphology of the tissues and the deposition of fibrocyte collagen, respectively, as shown in Figure 2A. The H&E results showed that, with increasing modeling time, the kidneys of the TIF rats exhibited glomerular fibrosis with cystic changes, glomerular enlargement with massive inflammatory cell infiltration, and widening of the renal interstitial space. The sections were then scored for pathological damage (Figure 2A), which showed that kidney injury increased as the modeling time increased.
Masson's results indicated that renal tubular dilatation, widening of the renal interstitial space, and an obvious increase in collagen fibers in the renal interstitium were seen in the obstructed-side kidneys of the model group compared with the control group (Figure 2B). As the modeling time increased, the fibrosis area increased by 12%, 19%, 35%, and 38% (Figure 2B).

(Figure 2: Histopathological results. (A) Sections of the right kidney of rats on the 4th, 7th, 10th, and 14th days were taken for H&E staining; scale: 600 µm (200×). Histopathological changes in kidney sections were scored as a semi-quantitative percentage of damaged cortical area: 0, normal; 1, <25%; 2, 25-50%; 3, 50-75%; 4, >75%, compared with the control group. (B) Sections of the right kidney of rats on the 4th, 7th, 10th, and 14th days were taken for Masson staining; the fibrosis area was quantified with Image-Pro Plus 6.0 and compared with the control group. **** p < 0.0001, *** p < 0.001, ** p < 0.01.)

The Correlation of Renal Transporters and Cr/Ccr in the Pathological State of Renal Fibrosis
Pearson correlation analysis was utilized to explore the relationship between renal transporter variation and Cr/Ccr under pathological conditions. The analysis showed that Oct2 and Mate1 were highly negatively correlated with Cr (Pearson coefficient |r| > 0.6, p ≤ 0.05). Among them, Oct2 (r = 0.624, p = 0.000061), Mate1 (r = 0.636, p = 0.0005), and Oat2 (r = 0.414, p = 0.013) were clearly related, while the coefficients of the rest were all less than 0.3. Mdr1-α was positively correlated, with correlation coefficients lower than 0.5 (Figure 4A); Bcrp (r = 0.49, p = 0.012) showed a medium relationship, but all others were less than 0.3 without a significant difference. The correlation between Ccr and the transporters further confirmed these results: Oct2 (r = 0.601, p = 0.0011), Mate1 (r = 0.434, p = 0.0266), and Mdr1-α (r = 0.440, p = 0.0244) (Figure 4B). In brief, these results showed that the renal transporters were related to Cr and Ccr, with Oct2, Mate1, and Mdr1-α most strongly correlated.

(Figure 4: Real-time qPCR analysis of transporter mRNA expressed as 2^−ΔΔCt relative to β-actin. (A) Pearson correlation between the dynamic changes in the main kidney transporters and Cr. (B) Transporter mRNA expression was detected on the 4th, 7th, 10th, and 14th days, and the correlation between the expression values and Ccr was then analyzed.)

The Correlation of Renal Transporters, Cr, and Renal Fibrosis in the Pathological State
Further, we conducted a correlation analysis of the dynamic change in Cr and the degree of fibrosis; the results showed that the degree of fibrosis was significantly positively correlated with the dynamic change in Cr (r = 0.736, p ≤ 0.05) (Figure 5A). We also examined the relationship between the transporters and the degree of renal fibrosis, which showed that Oct2 (r = 0.751, p = 0.0001), Mate1 (r = 0.744, p = 0.0002), and Mdr1-α (r = 0.597, p = 0.0055) were highly correlated with fibrosis, consistent with the results for Cr/Ccr (Figure 5B).

PK of Renal OCT2, MATE1, and P-gp Substrates in the TIF Rats
The Oct2, Mate1, and Mdr1-α genes govern the expression of the OCT2, MATE1, and P-gp proteins in vivo.
To determine the influence of changes in transporters on pharmacokinetic parameters under pathological conditions, three typical substrates for OCT2, MATE1 and P-gp were selected for pharmacokinetic studies. A methodological verification of the three drugs was conducted (Table 1, Table S1, Figure 6A-C) and the detection method met the methodological requirements. The results exhibited that the AUC of cimetidine (substrate of OCT2) in the model group increased 1.49 times compared with the control group ( Figure 6D). The value of renal clearance (Cl r ) in the model group decreased by 20.5%, which may be linked to the decreased expression of OCT2 protein in the kidney. Digoxin was a typical substrate of P-gp. Its AUC value reduced by 3.138-fold, while Cl r value increased by 2.6-fold, which might be related to the increased expression of P-gp in TIF rats. Ganciclovir is a substrate of MATE1. The AUC of ganciclovir decreased by 11.3%, while Cl r did not significantly change. The pharmacokinetics parameters did not significantly change when ganciclovir was combined with some MATE1 inhibitors or substrates, which may be related to other excretory pathways in vivo. Discussion The extensive literature suggests that the expression of kidney transporters in a pathological state will change, for example, under the rat liver ischemia-reperfusion model [14]. This will lead to the up-regulation of MRP and the down-regulation of OCT2, while, for hyperuricemia rats, in acute kidney injury, P-gp, MRP2 and other transporters will be significantly upregulated [15,16]. These changes may be due to the activation or induction of some upstream nuclear receptors under pathological conditions, such as PPAR-α and other nuclear receptors and transcription factors, thereby regulating the expression of downstream transporters [17,18], LXR and FXR are associated with Abcg1 gene and Abc-related protein expression, and its expression can cause changes in downstream transporters. PXR is associated with Slc-related protein expression and Abc-related protein expression, just like Mdr-1α and Slc16a1 [19,20]. Therefore, changes in the body or under certain inflammatory or pathological conditions may cause changes in the expression of some nuclear receptors in the pathway, thus leading to changes in the expression of other transporters [21]. In addition to the nuclear receptors referred to above, some inflammatory factors can also directly affect the expression of transporters. For example, TNF-α can inhibit the transcription of the tubule bile acid transporter Abcb11, bilirubin outlet Abcc2, and sterol transporter Abcg5/8 in intestinal inflammation, cholestasis, or the activation of hepatic macrophages, and thus affect the expression of transporters [22]. Therefore, the present study constructed a classical renal interstitial fibrosis model to explore the changes in renal transporter expression in rats under the TIF model. Since Cr and Ccr are commonly used indicators to evaluate renal function, this experiment wanted to explore the change rule of Cr and Ccr and the expression of various transporters under the renal interstitial fibrosis model, and whether the expression changes in major transporters in kidney could be inferred through the detection of Cr and Ccr. Therefore, in this paper, the dynamic changes in transport proteins under the TIF model were related to Cr and Ccr by correlation analysis, and transport proteins were found that were highly correlated with Ccr. 
Transporter inhibitors are compounds that competitively bind or inhibit transporter activity [23,24]. Therefore, in the case of multi-disease combination, there will be interactions between drugs, such as P-gp [25,26], which has a variety of inducers in vivo, including antibacterial drug rifampicin, anti-tumor drug vincristine, doxorubicin, cardiovascular drug verapamil [27], hyperlipidemia drug atorvastatin [28], etc., which can induce the overexpression of P-gp in vivo. As a result, the pharmacokinetics parameters of drugs such as digoxin in vivo are significantly changed, while digoxin has a narrow treatment window, and the blood concentration of digoxin will be greatly reduced in a multi-drug combination, so that digoxin cannot play a therapeutic role. In many studies, the combination of naproxen and other agents with a typical OCT2 substrate (cimetidine) increased the plasma concentration of cimetidine, thereby separately increasing the toxicity of cimetidine [29]. Ganciclovir [30], its pharmacokinetic behavior in some studies [31,32], and part of its MATE1 inhibitors or substrate share, showed no significant change in pharmacokinetics parameters, which may be related to other excretory pathways in the body [33]. In addition, this paper also examines the ligation of the bilateral renal compensatory, where the transporter will affect the elimination of the substrate, and the results showed that the left kidney transporter expression showed no obvious change. We also considered the effect of absorption of drug excretion, in view of the selected sev-eral drugs in the clinic, which are mainly for oral use, and choosing the means of lavage for pharmacokinetics validation. In addition, the specific transporters OCT2, Mate1 and MDR1-α showed a high correlation with renal fibrosis. Therefore, we can deduce the renal fibrosis process from indicators such as blood creatinine/creatinine clearance. Further research will continue to focus on this aspect and deeply explore the mechanism of renal transporter expression changes under pathological conditions. The study has several advantages and limitations. The advantages include the simplicity pf Cr in serum and Ccr, which can introduce a change in the transporter, as well as the transporter excretion of drug medication guides, without the need for a kidney biopsy. The first limitation is the change in renal fiber and Ccr constant transporter. It is unknown whether his drugs change, as their pharmacokinetics parameters were not studied. Second, the study used TIF rats in the 14th-day group, without considering the changes in pharmacokinetics parameters in other groups. In this study, only rats were used for transport experience, so this was not verified in clinical patients. Therefore, the results of this study may not be applicable to the whole population. As the next step, we will continue to supplement pharmacokinetics experiments to study whether the pharmacokinetics of substrates of several other transporters with a low correlation with Ccr will change, and verify this using in vitro experiments. Chemicals and Regents Chloral hydrate (≥99% in purity) was provided by Guangzhou Youbang Biotechnology Co., Ltd. (Guangzhou, China). Cimetidine (≥99% in purity) and irbesartan (internal standard, >98% in purity) ware purchased from Shenzhen upno Biomedical Technology Co., Ltd. (Shenzhen, China). The kit for analysis of blood urea nitrogen (BUN) and Cr was purchased from Nanjing Jiancheng Bioengineering Institute (Nanjing, China). 
The animal total RNA isolation kit was provided by Foregene Co., Ltd. (Chengdu, China). All other chemical reagents were of chromatographic or analytical grade and were commercially available. Animals Healthy male Sprague-Dawley rats (SD rats, aged 7-8 weeks, weight 180-220 g, certification: SCXK-Yue-2016-0041) in specific pathogen-free grade were available from the Experimental Animal Center of Southern Medical University (Guangzhou, China). All experiments followed the National Institutes of Health Guidelines for the Care and Use of Laboratory Animals. All animals were housed in an air-conditioned room with the temperature at 23 ± 2 • C and a relative humidity of 40 ± 5%, under an alternating 12 h dark/light cycle. Animals had free access to food and water throughout the experiment. Animal Experiment Thirty SD rats were randomly divided into five groups (n = 6): control and TIF model groups analyzed on the 4th, 7th, 10th and 14th days. The rats in the model group were intraperitoneally anesthetized with 10% chloral hydrate at a dose of 0.3 mL/100 g [34]. Surgery was carried out as previously described. In the model group, the right ureter was exposed and ligated at two points with a 5-0 sterile suture along the lower pole of the right kidney. Then, the ureter was cut to prevent retrograde infection [35]. Two hours later, they were intraperitoneally injected with penicillin (1.6 million units dissolved in 8 mL of normal saline) for two consecutive days. Each rat was subcutaneously injected with 0.30 mL. On the 4th, 7th, 10th and 14th days after operation, blood was taken from the orbit of TIF rats. Urine was taken from the TIF rats placed in the metabolic cage for 24h. Heart, liver, spleen, lung, kidney, intestine and other tissues were dissected. The kidney tissues were weighed and the ratio of kidney weight to body weight was estimated. The right kidney was longitudinally reduced and fixed with paraformaldehyde. The rest was used only for a real-time quantitative polymerase chain reaction (RT-qPCR). Histology Analysis The kidney tissue was longitudinally cut, rinsed several times with cold PBS, and fixed overnight with 4% paraformaldehyde. Then, five pieces were cut out after paraffin embedding of the µM section. The renal tissue was stained with H & E at low power (10 × 10) The observation site was determined under a high-power microscope (10 × 20 and 10 × 40) and the target field of vision was selected to take 1-2 pictures. In H & E staining, the degree of renal injury was determined according to the size of glomeruli and the changes in renal tubules [36]. Masson staining: Image Pro Plus 6.0 software was used for quantitative analysis. The degree of renal interstitial fibrosis was evaluated based on the amount of collagen deposition (the percentage of the blue area in the whole cortex). Five different cortical fields were randomly selected from each slice (magnification 200 times). The area of fibrotic lesions was expressed as the percentage of fibrotic area in the whole cortex [37,38]. Detection of mRNA Expression: RT-qPCR A total of 10-20 mg renal tissue samples were collected into the homogenization tube. Total RNA was extracted according to the protocol of animal tissue total RNA Extraction Kit (Foregene, Chengdu, China). A total of 1000 ng of total RNA was reverse-transcribed into cDNA using Evo m-mlv reverse transcription reagent (Accurate, Shenzhen, China). 
All subsequent RT-qPCR reactions were performed using 2× Accurattaq Master Mix (Accurate, China), primers (designed and synthesized by the Guangzhou branch of Beijing Qingke Biotechnology Co., Ltd., Guangzhou, China; Table S2), and ribonuclease-free ddH2O in a reaction volume of 20 µL. The PCR was conducted on a fast real-time PCR system (7500, Thermo Fisher Scientific, Waltham, MA, USA). Reactions were run at 50 °C for 3 min and 95 °C for 3 min, followed by cycling at 95 °C for 10 s and 60 °C for 30 s. The threshold cycle (CT) was recorded with 7500 Fast System software version 2.3, and the fold changes in mRNA expression were calculated according to the comparative CT method.

Pharmacokinetic Analysis
Thirty rats were split into six groups (n = 5). The rat model of TIF was established by unilateral ureteral obstruction under sterile conditions according to previous research. On the 14th day after establishment of the model, cimetidine (18 mg/kg), ganciclovir (45 mg/kg), and digoxin (5 mg/kg) were given orally to the respective groups [39]. Blood samples were collected from the retroorbital sinus at 0, 5, 15, 30, 45, 90, 120, 240, 360, 480, 720, and 1440 min and centrifuged immediately after collection (5000 rpm, 8 min); the obtained plasma was stored at −80 °C before pharmacokinetic analysis. The plasma concentrations of cimetidine, ganciclovir, and digoxin in SD rats were determined by ultra-performance liquid chromatography tandem mass spectrometry (UPLC-MS/MS). An API 4000 triple quadrupole tandem mass spectrometer (SCIEX, Framingham, MA, USA) with an ESI source (AB SCIEX, Framingham, MA, USA) was used in positive ion mode, and data acquisition and analysis were carried out with Analyst 1.6.2 software (Applied Biosystems, Foster City, CA, USA). Multiple reaction monitoring (MRM) parameters for ganciclovir, cimetidine, digoxin, and irbesartan (internal standard, IS) were optimized and are summarized in Table S3. The other ionization parameters were as follows: curtain gas, 20 psi; collision gas, 6 psi; ion source gas 1, 50 psi; ion source gas 2, 50 psi; source temperature, 500 °C; ion spray voltage, 5500 V. The bioanalytical method validation guidance for industry released by the FDA in 2018 was used to validate the analytical approach used in this study; selectivity, specificity, accuracy, matrix effects, and stability served as the key metrics affirming the validity of the method [40].

Statistical Analyses
The experimental data were analyzed with GraphPad Prism software (San Diego, CA, USA), and values were expressed as mean ± standard deviation (SD). Differences between groups (p < 0.05 and p < 0.01) were analyzed with SPSS 20.0; Dunnett's multiple comparison test or the LSD test was used for multiple comparisons, and p < 0.05 was regarded as statistically significant. The LC-MS/MS results were analyzed with DAS 2.0 software. Ccr was calculated with the following formula.
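The formula itself appears to have been lost in extraction; the sketch below therefore assumes the standard definition of creatinine clearance from a timed urine collection, together with the comparative CT (2^−ΔΔCt) calculation mentioned above. The variable names and example numbers are illustrative only, not study data.

```cpp
#include <cmath>
#include <cstdio>

// Standard creatinine-clearance definition for a timed urine collection
// (assumed here, since the original formula was lost): urinary Cr times urine
// volume, divided by serum Cr times collection time. With urinary and serum
// Cr in the same units, the result is a volume per minute (mL/min).
double creatinineClearance(double urineCr, double urineVolumeMl,
                           double serumCr, double collectionMinutes) {
  return (urineCr * urineVolumeMl) / (serumCr * collectionMinutes);
}

// Comparative CT method: fold change normalised to beta-actin and referenced
// to the control group, as used for the transporter mRNA data.
double foldChange(double ctTarget, double ctActin,
                  double ctTargetCtrl, double ctActinCtrl) {
  double dCt     = ctTarget - ctActin;          // normalise to beta-actin
  double dCtCtrl = ctTargetCtrl - ctActinCtrl;  // same for the control group
  return std::pow(2.0, -(dCt - dCtCtrl));       // 2^-(ddCt)
}

int main() {
  // Illustrative numbers only: a 24 h (1440 min) urine collection.
  double ccr = creatinineClearance(5000.0, 12.0, 60.0, 1440.0);
  double fc  = foldChange(24.5, 18.0, 23.0, 18.2);
  std::printf("Ccr  = %.2f mL/min\nfold change = %.2f\n", ccr, fc);
  return 0;
}
```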
Pharmacokinetic Analysis

Thirty rats were split into six groups (n = 5). The rat model of TIF was established by unilateral ureteral obstruction under sterile conditions according to previous research. On the 14th day after model establishment, cimetidine (18 mg/kg), ganciclovir (45 mg/kg) and digoxin (5 mg/kg) were administered orally to the respective groups [39]. Blood samples were collected from the retroorbital sinus at the 0 min, 5 min, 15 min, 30 min, 45 min, 90 min, 120 min, 240 min, 360 min, 480 min, 720 min and 1440 min timepoints and centrifuged immediately after collection (5000 rpm, 8 min). The obtained plasma was stored at −80 °C until pharmacokinetic analysis. The plasma concentration of cimetidine in SD rats was determined by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS). In positive ion mode, an API 4000 triple quadrupole tandem mass spectrometer (SCIEX, Framingham, MA, USA) with an ESI source (AB SCIEX, Framingham, MA, USA) was used, and data acquisition and analysis were carried out with Analyst 1.6.2 software (Applied Biosystems, Foster City, CA, USA). Multiple reaction monitoring (MRM) parameters for ganciclovir, cimetidine, digoxin and ebesartan (internal standard, IS) were optimized and are summarized in Table S3. The other ionization parameters were as follows: curtain gas, 20 psi; collision gas, 6 psi; ion source gas 1, 50 psi; ion source gas 2, 50 psi; source temperature, 500 °C; and ion spray needle voltage, 5500 V. The bioanalytical method validation guidance for industry released by the FDA in 2018 was used to validate the analytical approach used in this study. Selectivity, specificity, accuracy, matrix effects and stability served as the key metrics to affirm the validity of the method [40].

Statistical Analyses

The experimental data were analyzed with GraphPad Prism software (San Diego, CA, USA) and are expressed as mean ± standard deviation (SD). Differences between groups were analyzed with SPSS 20.0; Dunnett's multiple comparison test or the LSD test was used for multiple comparisons, and p < 0.05 (or p < 0.01 where indicated) was regarded as statistically significant. The LC-MS/MS results were analyzed with DAS 2.0 software. Ccr was calculated with the following formula.

Conclusions

In conclusion, this experiment explored the relationships among the major kidney transporters, creatinine, creatinine clearance and the renal fibrosis area. The time course of the TIF pathological model in rats was studied to infer the relationship between creatinine, creatinine clearance and the kidney transporters. The results showed that OCT2 and MATE1 were negatively correlated with creatinine and fibrosis area, and positively correlated with creatinine clearance, while P-gp showed the opposite pattern. Therefore, we propose that Cr/Ccr can be used to infer transporter expression and the progression of renal fibrosis. In pharmacokinetic studies with typical substrates, lower OCT2 expression was accompanied by marked changes in the pharmacokinetic parameters of its substrate cimetidine, with a notable rise in AUC and Cmax and a significant decrease in Clr, suggesting that cimetidine excretion was significantly slowed in the TIF model. The substrates of MATE1 and P-gp showed the opposite results. Therefore, we believe that Cr/Ccr can serve as an indicator of OCT2, MATE1 and P-gp transporter expression; its changes are significantly correlated with changes in OCT2, MATE1 and P-gp, providing data and a reference for clinical medication in patients with renal disease.

Supplementary Materials: Table S1: Methodology of Ganciclovir, Cimetidine and Digoxin, Table S2: The primer sequences of target genes and β-actin, Table S3: Optimized MRM parameters for analytes and IS.
Potent Activation of Human but Not Mouse TRPA1 by JT010 Transient receptor potential (TRP) ankyrin repeat 1 (TRPA1), which is involved in inflammatory pain sensation, is activated by endogenous factors, such as intracellular Zn2+ and hydrogen peroxide, and by irritant chemical compounds. The synthetic compound JT010 potently and selectively activates human TRPA1 (hTRPA1) among the TRPs. Therefore, JT010 is a useful tool for analyzing TRPA1 functions in biological systems. Here, we show that JT010 is a potent activator of hTRPA1, but not mouse TRPA1 (mTRPA1) in human embryonic kidney (HEK) cells expressing hTRPA1 and mTRPA1. Application of 0.3–100 nM of JT010 to HEK cells with hTRPA1 induced large Ca2+ responses. However, in HEK cells with mTRPA1, the response was small. In contrast, both TRPA1s were effectively activated by allyl isothiocyanate (AITC) at 10–100 μM. Similar selective activation of hTRPA1 by JT010 was observed in electrophysiological experiments. Additionally, JT010 activated TRPA1 in human fibroblast-like synoviocytes with inflammation, but not TRPA1 in mouse dorsal root ganglion cells. As cysteine at 621 (C621) of hTRPA1, a critical cysteine for interaction with JT010, is conserved in mTRPA1, we applied JT010 to HEK cells with mutations in mTRPA1, where the different residue of mTRPA1 with tyrosine at 60 (Y60), with histidine at 1023 (H1023), and with asparagine at 1027 (N1027) were substituted with cysteine in hTRPA1. However, these mutants showed low sensitivity to JT010. In contrast, the mutation of hTRPA1 at position 669 from phenylalanine to methionine (F669M), comprising methionine at 670 in mTRPA1 (M670), significantly reduced the response to JT010. Moreover, the double mutant at S669 and M670 of mTRPA1 to S669E and M670F, respectively, induced slight but substantial sensitivity to 30 and 100 nM JT010. Taken together, our findings demonstrate that JT010 potently and selectively activates hTRPA1 but not mTRPA1. The chemical compound JT010, 2-chloro-N-(4-(4-methoxyphenyl)thiazol-2-yl)-N-(3methoxypropyl-acetamide, developed by Takaya et al., potently and selectively activates TRPA1 [18]. The half maximum concentration required for TRPA1 activation was reported to be 0.65 nM when applied to human embryonic kidney (HEK) cells expressing human TRPA1 (hTRPA1). In contrast, JT010, even at 1 µM, did not activate TRP vanilloid family type 1, 3, and 4 (TRPV1, TRPV3, TRPV4), TRP melastatin family type 2 and 8 (TRPM2, TRPM8), and TRP canonical family type 5 (TRPC5), suggesting that JT010 is a useful TRPA1 activator as a potential pharmacological tool. Several TRPA1 cysteine residues are targeted by physiological and non-physiological electrophilic compounds that activate the channel [2,19]. Particularly, recent studies have revealed that as a two-step model, cysteine 621 (C621) is critical for channel activation by electrophilic compounds and cysteine 665 (C665) is supportive and important for activation [20,21]. Indeed, hTRPA1 activation by 9,10-PQ is dependent on both C621 and C665 at the N-terminus of the channel [16]. It has also been shown that the C621 mutation in hTRPA1 prevents channel activation by JT010 [18]. Moreover, phenylalanine 669 (F669) is critical for the binding of JT010 to hTRPA1 [22]. Meanwhile, it has been proposed that the attachment of a large electrophile, such as JT010, to C621 is sufficient for the full activation of the channel. 
Moreover, confirmation of the binding pocket is supported by the interaction between lysine 671 (K671) and the C terminus of the TRP helix of TRPA1 [21]. In contrast, non-electrophilic compounds such as ∆9-tetrahydrocannabinol, nicotine, and menthol activate TRPA1 via different mechanisms [1,23,24]. Therefore, pharmacological tools are important to understand the physiological functions of TRPA1, and extensive efforts have been made to develop highly selective TRPA1 agonists and antagonists. In this study, we showed that a potent TRPA1 agonist, JT010, at concentrations under 100 nM can activate hTRPA1 but not mTRPA1. A comparison of the JT010-induced response of wild-type and mutant TRPA1s in humans and mice in response to allyl isothiocyanate (AITC) revealed that the mutation of hTRPA1 at position 669 from phenylalanine to methionine (F669M) reduced the sensitivity to JT010, and the double mutation at S669 (serine) and M670 (methionine) of mTRPA1 to glutamine (S669E) and phenylalanine (M670F), respectively, induced weak but substantial sensitivity to JT010. Treatment with JT010 also effectively activated endogenous hTRPA1 in human fibroblast-like synoviocytes (FLSs) with inflammation, but not mTRPA1 in dorsal root ganglion (DRG) cells isolated from mice. Our findings provide novel and important pharmacological evidence that JT010 is a much weaker TRPA1 agonist in mice than in humans. Results To confirm the expression of wild-type and mutant TRPA1s in human embryonic kidney (HEK) cells, the TRPA1 channel function was tested by applying AITC at the end of each experiment, except during the application of high JT010 concentrations [25]. Additionally, we confirmed the expression of wild-type and mutant hTRPA1 by Western blotting at the protein level (Supplementary Figure S1A) but failed to find any specific antibody against mTRPA1 (three different antibodies used, not shown). Using HEK cells expressing wild-type hTRPA1 (HEK-hTRPA1), we examined the effects of JT010 on hTRPA1. As shown in Figure 1A, treating HEK-hTRPA1 cells with JT010 at concentrations ranging from 0.3 to 100 nM effectively induced a Ca 2+ response, whereas the treatment failed to evoke any Ca 2+ response in control HEK cells (Supplementary Figure S1B). The doseresponse relationship observed indicated that the half maximum concentration (EC 50 ) required for 50% response was approximately 10 nM ( Figure 1B), suggesting that JT010 is a potent TRPA1 agonist [18]. To confirm whether JT010 also induces the Ca 2+ response of mTRPA1, we applied JT010 to HEK cells expressing wild-type mTRPA1 (HEK-mTRPA1). Surprisingly, applying 10 nM JT010 induced a small Ca 2+ response of mTRPA1 compared with that in control HEK cells ( Figure 1D vs. Supplementary Figure S1B), whereas 100 µM AITC elicited a large response of mTRPA1 and hTRPA1 ( Figure 1C,D). Application of much higher JT010 (1000 nM) concentrations induced a moderate Ca 2+ response of mTRPA1 (Supplementary Figure S1C), suggesting that mTRPA1 is much less sensitive to JT010 than hTRPA1. . At the end of each experiment, 100 μM AITC was applied to confirm hTRPA1 expression. (C,D) JT010 and AITC at 10 nM and 100 μM, respectively, were applied to HEK-hTRPA1 and HEK-mTRPA1 cells, and the measured Ca 2+ response (C) and the peak JT010-and AITC-induced Ca 2+ response (five independent experiments each) (D) are summarized. Two-way analysis of variance (ANOVA): * p = 0.0279, F = 5.85 (species); ** p < 0.0001, F = 84.5 (drugs); * p = 0.0312, F = 5.58 (interaction). Vertical bars = SEM. 
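The EC50 of roughly 10 nM quoted above comes from the concentration-response relationship in Figure 1B. A common way to obtain such an estimate is a Hill-equation fit; the sketch below uses SciPy with made-up response values purely for illustration, since the actual data points are only available in Figure 1B.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, emax, ec50, n):
    """Hill equation: response as a function of agonist concentration c."""
    return emax * c**n / (ec50**n + c**n)

# Illustrative JT010 concentrations (nM) and peak Ca2+ responses (delta-ratio);
# these numbers are placeholders, not the measured values from Figure 1B.
conc = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([0.02, 0.08, 0.25, 0.55, 0.95, 1.05])

(emax, ec50, n), _ = curve_fit(hill, conc, resp, p0=[1.0, 10.0, 1.0])
print(f"Emax = {emax:.2f}, EC50 = {ec50:.1f} nM, Hill slope = {n:.2f}")
```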
To further examine whether JT010 potently activates hTRPA1 but not mTRPA1, we applied 10-100 nM JT010 to HEK-hTRPA1 and HEK-mTRPA1 cells in whole-cell recording mode ( Figure 2). To maintain TRPA1 channel activity during recording, we applied chemical agents in the absence of external Ca 2+ in the standard HEPES-buffered bathing solution (SBS) and the presence of internal Ca 2+ at 0.3 µM in a pipette solution [16]. As shown in Figure 2A-C, exposure of HEK-hTRPA1 cells to JT010 induced inward and outward currents at −90 and +90 mV, respectively, in a concentration-dependent manner. Moreover, a TRPA1 antagonist, A-967079 (A96), at 5 µM abolished JT010-induced currents, demonstrating that JT010 potently activated hTRPA1 channel currents. In contrast, the application of JT010 to HEK-mTRPA1 cells did not induce any clear mTRPA1 channel currents, while 30 µM AITC markedly induced currents at −90 and +90 mV, sensitive to A96 ( Figure 2D-F). Moreover, to exclude the possibility that relatively higher intracellular Ca 2+ (0.3 µM) modifies the effects of JT010 on mTRPA1, HEK cells were internally superfused with only 1 mM EGTA without Ca 2+ (Supplementary Figure S2). Even under these experimental conditions, JT010 did not induce mTRPA1 activation but effectively activated hTRPA1. These results suggest that JT010 is a potent hTRPA1 agonist, but not mTRPA1. Next, we examined the effects of JT010 on the TRPA1 channels expressed in human and mouse tissues. As inflammatory stimulation with interleukin-1α (IL-1α) transcriptionally induces TRPA1 expression in human FLSs [26,27], we applied JT010 and AITC to human FLSs with or without inflammation.
As shown in Figure 3A, both 10 nM JT010 and 100 µM AITC induced substantial Ca 2+ responses in inflammatory FLSs treated with IL-1α for 24 h, but not in control FLSs (vehicle). FLSs sensitive to AITC largely responded to JT010 (74%, 37 cells out of 50 cells) and vice versa (0%, 0 cells out of 40 cells), suggesting that JT010 can activate endogenous hTRPA1 channels in human tissues. In contrast, mouse DRGs sensitive to 100 µM AITC (40 cells out of 45 cells) did not respond to 10 nM JT010 (0 cells out of 40 cells), implying that JT010 at less than 10 nM could not activate endogenous mTRPA1 channels in mouse tissue ( Figure 3B). As previously reported [18,22], the substitution of cysteine in hTRPA1 at 621 (C621, Figure 4A) with serine (C621S) markedly reduced the response to 10 nM JT010, but not 100 µM AITC ( Figure 4B-D, see also Figure 1B as the control), confirming that C621 is critical for the JT010-induced TRPA1 response in humans. This C621 is conserved in mTRPA1 (C622, Figure 4A); therefore, we explored the insensitivity mechanism of mTRPA1 against JT010 using mutant mTRPA1s (Figures 4-6). First, we focused on three cysteines in hTRPA1 that are not conserved in mTRPA1 (Y60, H1023, and N1027, arrow in Figure 4A). To test the possible involvement of cysteines in the sensitivity to JT010, we mutated these residues to cysteine (Y60C, H1023C, and N1027C) and applied JT010 to each mutant. None of these mutants were sensitive to JT010 at 10 nM, whereas all responded to 100 µM AITC ( Figure 4B-D), indicating that these mTRPA1 residues may not determine the responsiveness to JT010. Figure 3. Effects of JT010 on endogenous TRPA1 in human and mouse cells. (A) JT010 at 10 nM and AITC at 100 µM were applied to human FLSs with or without inflammation. FLSs were treated with 10 U IL-1α or vehicle for 24 h and then exposed to JT010 and AITC, and Ca 2+ response was monitored ((A), each representative cell). The peak JT010- and AITC-induced Ca 2+ responses (∆ratio) in FLSs with or without IL-1α (six independent experiments each) are summarized in the lower panel. Two-way ANOVA: ** p < 0.0001, F = 167 (pretreatments); ** p < 0.0001, F = 27.0 (drugs); ** p = 0.00015, F = 21.7 (interaction). (B) JT010 at 10 nM and AITC at 100 µM were applied to mouse DRGs, and Ca 2+ response was monitored ((B), a representative cell). The peak JT010- and AITC-induced Ca 2+ responses (∆ratio) in DRGs are summarized (lower panel, five independent experiments). Paired Student's t-test: ## p = 0.00042. Vertical bars = SEM.
Figure 4. Comparison of JT010-induced TRPA1 response among mutants of N- and C-terminal cysteine residues of hTRPA1 and mTRPA1. (A) Alignment of amino acid sequence between hTRPA1 and mTRPA1. C621 and C665 in hTRPA1 (homologous to mTRPA1 C622 and C666) shown by boxes indicate critical cysteines for electrophilic TRPA1 agonist modification. Bold and underlined letters (C59, C1021, C1025 in human, indicated by an arrow) show cysteines substituted in mutant mTRPA1 (Y60C, H1023C, N1027C), whose effect was examined. Yellow color boxes indicate potential critical amino acids for JT010-sensitivity, whose importance is examined in Figures 5 and 6. (B-D) Ca 2+ responses of mutant hTRPA1 with C621S mutation and mutant mTRPA1s with Y60C, H1023C, and N1027C mutations to 10 nM JT010. To confirm the channel expression, 100 µM AITC was applied at the end of the experiment. Each representative Ca 2+ response was superimposed (B) and the peak JT010- and AITC-induced Ca 2+ response (∆ratio) is summarized (C, five independent experiments). Paired Student's t-test: ** p < 0.0001, ** p = 0.00092, ** p < 0.0001, and ** p = 0.00064 for hC621S, mY60C, mH1023C, and mN1027C, respectively. (D) Ca 2+ response to JT010 was normalized with that to AITC and is summarized. The responses of wild hTRPA1 and mTRPA1 were also included as a comparison (the same data set as Figure 1). Unpaired Student's t-test: ## p < 0.0001. The 'ns' shows no significance by the Tukey-Kramer test. Vertical bars = SEM. A recent structural analysis of hTRPA1 revealed that the phenylalanine residue at position 669 (F669 shown in yellow in Figure 4A) of hTRPA1 is critical for channel activation by JT010 [22]. Indeed, this phenylalanine is not conserved in mTRPA1 (M670 in mouse; Figure 4A). In addition, the glutamate residue at position 668 (E668 shown in yellow in Figure 4A) of hTRPA1 is substituted with serine in mTRPA1 (S669, Figure 4A). Therefore, we examined the involvement of these amino acid residues with different sensitivities to JT010 between hTRPA1 and mTRPA1. As shown in Figure 5A-C, the F669M mutation in hTRPA1 significantly reduced the sensitivity to JT010, but not to AITC, suggesting that M670 in mTRPA1 renders a lower sensitivity to JT010. Particularly, the mutant hTRPA1 with F669M was insensitive to 10 nM JT010 ( Figure 5C). To further confirm the importance of F669, we applied JT010 to mTRPA1 with M670F mutation and double mutants with S669E and M670F mutations ( Figure 6A,B).
As 30 µM AITC induced a similar size of TRPA1 currents at +90 and −90 mV, the response of these mutants to JT010 did not differ from that of wild mTRPA1 ( Figure 6C). Furthermore, the relative change in JT010-induced TRPA1 currents was analyzed to normalize the current amplitude against the control before application of JT010 for each TRPA1 ( Figure 6D). The single M670F mutation in mTRPA1 (M670F-mTRPA1) was not sufficient to induce JT010-sensitivity. However, the double mutations of S669E and M670F induced weak but substantial sensitivity to 30 and 100 nM JT010 at +90 mV and 100 nM at −90 mV ( Figure 6D). Nevertheless, the potency of this double mutant against JT010 was much lower than that of F669M- and wild-hTRPA1 ( Figure 6D vs. Supplementary Figure S3).
Figure 6. S669 and M670 are the potential amino acids that determine the low sensitivity of mTRPA1 to JT010. Cells were superfused with SBS without Ca 2+ and dialyzed with a Cs-aspartate-rich pipette solution including 0.3 µM Ca 2+. Ramp waveform voltage pulses from −110 to +90 mV for 300 ms were applied every 5 s. (A,B) JT010 was commutatively applied to HEK cells with an M670F substitution in mTRPA1 (A, 670F-mTRPA1) and double substitutions of S669E and M670F (B, 669E, 670F-mTRPA1) to examine the effects on membrane currents at −90 and +90 mV, and the pooled data of the peak currents evoked are summarized (C, four to six independent experiments including the same data set as in Figure 2F). After applying 100 nM JT010, 5 µM A96 was added to block the TRPA1 channel current components. For comparison, 30 µM AITC was used. In the middle and right panels of (A,B), the I-V relationships under each experimental condition are shown. (D) Each current amplitude shown in (C) was normalized to that of the control without JT010 and exhibited the relative amplitude change under each treatment. Dunnett's multiple comparisons test was performed for each TRPA1 gene. * p = 0.0116 and ** p = 0.00829 for 30 and 100 nM JT010, respectively in 669E, 670F-mTRPA1 (+90 mV). * p = 0.0327 for 100 nM JT010 in 669E, 670F-mTRPA1 (−90 mV). Vertical bars = SEM.

Discussion

In this study, we showed that the potent TRPA1 agonist JT010, which activated hTRPA1 at a concentration range of 0.3 to 100 nM, did not induce clear responses of mTRPA1 in HEK cells with heterologous expression of the channel. In contrast, both hTRPA1 and mTRPA1 showed similar responses to the conventional TRPA1 agonist, AITC. Moreover, JT010 induced the Ca 2+ response of endogenous TRPA1 in human FLSs with inflammation but not in mouse DRG cells. As reported, substitution of F669 in the N-terminus of hTRPA1 to methionine, homologous to mTRPA1 methionine at 670 (F669M-hTRPA1), significantly reduced the response to JT010. In contrast, while a single M670F mutation in mTRPA1 was still insensitive to JT010, the double mutant of mTRPA1 with S669E and M670F mutations induced a weak but substantial response to JT010. Taken together, JT010 is a potent TRPA1 agonist in humans, but not in mice. We confirmed that JT010, a potent TRPA1 agonist, is an effective hTRPA1 agonist. In experiments measuring Ca 2+ responses and membrane ionic currents, 10-100 nM JT010 induced hTRPA1-dependent responses.
Indeed, JT010 did not elicit a Ca 2+ response in native HEK (Supplementary Figure S1B). Moreover, it has been reported that JT010, even at 1 µM, does not activate TRPV1, TRPV3, TRPV4, TRPM2, TRPM8, and TRPC5 [18]. While the EC 50 of JT010 against hTRPA1 was 0.65 nM in a cell-based calcium uptake assay, it was~7.6 nM in an electrophysiological study [18,22]. Comparing the pharmacological features of compounds using different assays can be challenging. In this study, the EC 50 of JT010 was estimated to be 3-10 nM against hTRPA1 in Ca 2+ measurements ( Figure 1B) and electrophysiological assays (Figures 2 and 5), strongly supporting that JT010 is a potent hTRPA1 agonist. Moreover, neither 10 nM JT010 nor 100 µM AITC elicited a Ca 2+ response in human FLSs without inflammation. In contrast, both agonists evoked clear responses in the inflammatory FLSs, implying that endogenous TRPA1 in humans can be targeted by JT010. Consistently, injection of JT010 caused pain in humans with a half-maximal effective concentration of 0.31 µM [28], suggesting that JT010 is an effective TRPA1 agonist in vivo in human. In contrast, it has not been determined that JT010 is a weak TRPA1 agonist in vivo in rodents including mouse. In this study, we confirmed that C621 and F669 are critical for the JT010-induced hTRPA1 activation. When we applied 10 nM JT010 to mutant hTRPA1 with C621S mutation, the Ca 2+ response was abolished ( Figure 4). Consistently, 10 nM JT010 failed to stimulate the Ca 2+ response in C621S mutant cells [18]. Moreover, mutant hTRPA1 with C621S was insensitive to 100 nM JT010 in whole-cell current-recording experiments [22]. Therefore, it is reasonable to assume that the primary binding site of JT010 is the cysteine residue at position 621 of hTRPA1 (Figure 7). Meanwhile, based on the two-step model proposed, whereby C621 is the primary site of electrophile modification and C665 is another modification site for full channel activation, the binding of a small electrophile, AITC, to C665 in hTRPA1 could support the full activation of TRPA1. However, it has been proposed that bulky JT010 can stabilize the open pocket by modifying C621 alone [21]. It is pharmacologically useful to compare JT010 docking sites between hTRPA1 and mTRPA1. In our preliminary docking simulation, the affinity of JT010 against hTRPA1 was weak (∆G = −5.9 kcal/mol). This low affinity cannot explain the high potency of JT010 against hTRPA1 in the previous and present experimental studies [18,22]. Because the docking sites at the highest rank simulated are different from those of the cryo-EM data, it is likely that the docking simulation is limited. It is clear that mTRPA1 is less sensitive to JT010 than hTRPA1, and JT010 lower than 100 nM hardly activates mTRPA1. Intriguingly, mTRPA1 conserves both C622 (C621 in humans) and C666 (C665 in humans), which are critical cysteines for electrophilic modifications, including those of JT010 ( Figure 4A). Suo et al. found that JT010 covalently binds to C621 of hTRPA1 and interacts with phenylalanine at 612 (F612) and tyrosine at 680 (Y680) of hTRPA1 via CH-π and sulfur-π formation through its thiazol group, respectively [22]. Moreover, the methoxyphenyl group of JT010 potentially interacts with histidine at 614 (H614), proline at 666 (P666), and F669. Mutants with serine at C621 (C621S), alanine at F612 (F612A), alanine at Y680 (Y680A), and alanine at F669 (F669A) mutations exhibited no sensitivity to 100 nM JT010. 
The importance of isoleucine at 623 (I623), tyrosine at 662 (Y662), and threonine at 684 (T684) of hTRPA1 against JT010 is also clear; the respective mutants dramatically reduce the response to JT010 [22]. Among these important residues, F669 alone is not conserved in mTRPA1, where the homologous residue is substituted with methionine (M670, Figure 7), suggesting that this substitution lowers the sensitivity of mTRPA1 to JT010. While the mutant hTRPA1 with F669A mutation was insensitive to 100 nM JT010 [22], the mouse-type mutant with F669M in hTRPA1 retained the response to 30 and 100 nM JT010 ( Figure 5). In contrast, the M670F mutation in mTRPA1 did not induce a clear JT010-dependent response ( Figure 6). This suggests that F669 plays an important role in the response of hTRPA1 to JT010 and that M670 is not a critical determinant of lower sensitivity to JT010 in mTRPA1 (Figure 7).

Figure 7 (caption excerpt): ... are colored orange and red, respectively. A close-up view of the JT010 binding sites is shown on the right. The methoxyphenyl group of JT010 potentially interacts with the phenyl of F669 in hTRPA1 in a π-π interaction manner. The coordination is shown by the dotted line with a 4.9 Å distance.
Out of 100 models, the structure with the lowest zDOPE score (2.69) was adopted (see also Materials and Methods). Although the mTRPA1 double mutant with S669E and M670F mutations induced small responses to JT010, its potency was still significantly lower than that of hTRPA1 ( Figure 6D vs. Supplementary Figure S3). As the double mutant included all crucial amino acid residues for the interaction with JT010 proposed, it is notable that the interaction of JT010 with these residues is not sufficient to explain the activation mechanism of TRPA1 by lower JT010 concentration. In contrast, AITC (30 µM) induced large membrane currents of wild-type and all mutant TRPA1s in humans and mice in this study, indicating that large electrophiles like JT010 may have additional interactions with TRPA1. TRPA1 interacts differently with agonists and antagonists from species to species via distinctive molecular mechanisms. A96, a potent TRPA1 antagonist in humans and mice, is a TRPA1 agonist in chicken [29]. Moreover, menthol, a non-electrophile agonist of hTRPA1, inhibits mTRPA1 [23,30]. Particularly, when bulky electrophilic compounds are used, there may be species differences in the response of TRPA1. The basal channel activity was different between hTRPA1 and mTRPA1 under the intracellular dialysis of 0.3 µM Ca 2+ ( Figure 2B,D, 1520.2 ± 301.8 pA vs. 328.35 ± 74.73 pA in mTRPA1 and hTRPA1 at +90 mV, respectively, p < 0.01; −629.55 ± 210.1 pA vs. −87.38 ± 32.58 pA in mTRPA1 and hTRPA1 at −90 mV, respectively, p < 0.05). It is unlikely that this basal activity affected the interaction with JT010 in mTRPA1. Indeed, JT010 at concentrations under 100 nM did not affect mTRPA1, even in the absence of intracellular Ca 2+ , where the basal channel activity was lower (Supplementary Figure S2). When applied to DRG cells isolated from mice, 10 nM of JT010 did not induce any response. Because 100 µM AITC evoked a response in 88% of cells employed, mTRPA1 expression was apparent, suggesting that a lower concentration of JT010 cannot activate endogenous mTRPA1. Notably, the application of 1000 nM JT010 induced a small but substantial Ca 2+ response in HEK-mTRPA1 cells (Supplementary Figure S1C), possibly indicating the interaction of JT010 with C622 of mTRPA1. Nevertheless, it is clear that mTRPA1 is relatively resistant to lower JT010. In conclusion, we showed that the potent hTRPA1 agonist JT010 could not activate mTRPA1 at concentrations ranging from 0.3 to 100 nM. Moreover, JT010 induced the response of endogenous TRPA1 in human FLSs with inflammation, but not in mouse DRG cells, both of which were sensitive to AITC. As confirmed by the importance of F669 in the N-terminus of hTRPA1 for the JT010-interaction, methionine substitution, which is homologous to mTRPA1 M670, significantly reduced the response to JT010. In contrast, while a single M670F mutation in mTRPA1 was still insensitive to JT010, the double mutant mTRPA1 with S669E and M670F mutations evoked a weak but substantial response to JT010. Taken together, JT010 is a potent TRPA1 agonist in humans, but not in mice. Cell Culture HEK cells obtained from the Health Science Research Resources Bank (HSRRB, Osaka, Japan) were maintained in Dulbecco's modified Minimum Essential medium (D-MEM; Sigma-Aldrich, St. Louis, MO, USA) supplemented with 10% heat-inactivated fetal calf serum (FCS; Sigma-Aldrich), penicillin G (100 U/mL, Meiji Seika Pharma Co., Ltd., Tokyo, Japan), and streptomycin (100 µg/mL, Meiji Seika Pharma Co., Ltd., Tokyo, Japan). 
Human FLSs, which were purchased from Cell Applications (San Diego, CA, USA), were cultured in Synoviocyte Growth medium containing 10% growth supplement, 100 U/mL penicillin G, and 100 µg/mL streptomycin, as described previously [26]. FLSs were maintained at 37 • C in a 5% CO 2 atmosphere. After they reached 70-80% confluence, FLSs were reseeded once every 10 days until nine passages were completed. The cells that grew with a doubling time of 6-8 days after this stage comprised a homogenous population, in which TRPA1 transcriptionally induced by IL-1αwas found to be unaffected. For the experiments, reseeded cells were cultured for 16 days and then exposed to IL-1α or vehicle. Cell Isolation from DRG in Mice This study was approved by the Animal Care Committee of Aichi Gakuin University (approval code 21-036 and 22-007) and conducted per the Guiding Principles for the Care and Use of Laboratory Animals approved by the Japanese Pharmacological Society. Male mice weighing 20-30 g were anesthetized with isoflurane and decapitated. Four to five DRGs isolated were washed in phosphate-buffered solution (PBS [in mM]: NaCl 137, KCl 5.4, MgCl 2 1.2, CaCl 2 2.2, Na 2 HPO 4 0.168, KH 2 PO 4 0.44, glucose 5.5, NaHCO 3 4.17, pH7.45) and treated with Ca 2+ -Mg 2+ -free PBS containing 0.05% collagenase (Amano, Nagoya, Japan) and 0.05% dispase (Boehringer Mannheim, Tokyo, Japan). All DRGs were kept in an incubator at 37 • C for 60 min and then the enzyme solution containing isolated DRGs was centrifuged at 1200× g rpm for 10 min. Thereafter, the supernatant was removed and the pellet was resuspended in culture medium (D-MEN with 10% FCS) and gently agitated with a fire-polished wide-pore pipette. Isolated DRG cells were allowed to attach to gelatin-coated glass coverslips in a 35 mm dish and were cultured for 24-48 h at 37 • C in a 5% CO 2 atmosphere, and used within 48 h. Patch-Clamp Experiments Whole-cell current recordings were performed as previously described [31]. The resistance of the electrodes was 3-5 MΩ when filled with pipette solution. The Cs + -rich pipette solution contained [in mM] Cs-aspartate 110, CsCl 30, MgCl 2 1, 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES) 10, ethylene glycol tetra-acetic acid (EGTA) 10, and Na 2 ATP 2 [adjusted to pH 7.2 with CsOH]. To maintain the activity of TRPA1 currents, intracellular free Ca 2+ concentration was adjusted to a pCa value of 6.5 (0.3 µM Ca 2+ ) by adding CaCl 2 to the pipette solution. In some experiments, the EGTA concentration in the pipette solution was reduced to 1 mM in the absence of CaCl 2 . The membrane currents and voltage signals were digitized using an analog-digital converter (PCI-6229, National Instruments Japan, Tokyo, Japan). WinWCPV5.52 software was used for data acquisition and analysis of whole-cell currents (developed by Dr. John Dempster, University of Strathclyde, UK). The liquid junction potential between the pipette and bath solutions (−10 mV) was calculated. A ramp voltage protocol from −110 mV to +90 mV for 300 ms was applied every 5 s from a holding potential of −10 mV. The leak current component was not subtracted from the recorded current. A standard HEPES-buffered bathing solution (SBS [in mM]: NaCl 137, KCl 5.9, CaCl 2 2.2, MgCl 2 1.2, glucose 14, and HEPES 10 [adjusted to pH 7.4, with NaOH]) was used. All experiments were performed at 25 ± 1 • C. 
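The free Ca2+ level in the pipette solution quoted above follows directly from the definition of pCa; as a quick check of the stated 0.3 µM:

```latex
\mathrm{pCa} = -\log_{10}[\mathrm{Ca}^{2+}] = 6.5
\quad\Longrightarrow\quad
[\mathrm{Ca}^{2+}] = 10^{-6.5}\,\mathrm{M} \approx 0.32\,\mu\mathrm{M} \approx 0.3\,\mu\mathrm{M}.
```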
Measurement of Ca 2+ Fluorescence Ratio HEK, FLSs, and DRG cells, which were loaded with 10 µM Fura2-AM (Dojindo, Kumamoto, Japan) in SBS for 30 min at room temperature, were superfused with SBS for 10 min, and Fura-2 fluorescence signals were measured at 0.1 Hz using the Argus/HisCa imaging system (Hamamatsu Photonics, Hamamatsu, Japan) driven by Imagework Bench 6.0 (INDEC Medical Systems, Santa Clara, CA, USA). Since the efficacy of gene transfection in HEK cells and the TRPA1 expression level in FLSs and DRG cells were similar but not identical from cell to cell, we collected 50, 5-11, and 8-12 single cells of HEK, FLSs, and DRG cells, respectively, on one coverslip to obtain the average response. We repeated the same protocol with other coverslips to obtain the mean and standard error of the mean (SEM) of independent experiments. In each analysis, the whole cell area was chosen as the region of interest to average the fluorescence ratio. Modeling Molecular modeling was performed on UCSF Chimera v1.16 [32] using the modeling software MODELLER v10.3 [33,34]. Using the amino acid sequence of mTRPA1 (ID: Q8BLA8) from UniProt (https://www.uniprot.org/ accessed on 30 August 2022), we modeled the structure of mTRPA1 as a monomer, with JT010-bound hTRPA1 (PBD ID:6PQO) as a template. Of the 100 models, the structure with the lowest normalized Discrete Optimized Protein Energy (zDOPE) score was adopted (Figure 7). The software Autodock-Vina was used to predict the possible binding models of JT010 to hTRPA1 (200 models) and the solutions were ranked according to their binding energy [35]. The grid box for docking model was set to locate C621 at the center and to include all amino acid residues interacted with JT010 (C665, F621, Y680, T684, Y662, I623, and F669 [22]).
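The docking step described above can be reproduced in outline with a standard AutoDock Vina run; the sketch below writes a Vina configuration with the grid box centred on C621, as stated in the text. The file names, box coordinates and box size are placeholders (they depend on how the 6PQO-based receptor was prepared), so treat this as an assumed setup rather than the authors' exact protocol.

```python
import subprocess

# Placeholder inputs: prepared PDBQT files and a grid box centred on C621.
# The coordinates below are NOT from the study; they must be read off the
# prepared receptor structure before running.
config = """\
receptor = hTRPA1_6PQO_prepared.pdbqt
ligand = JT010.pdbqt
center_x = 0.0
center_y = 0.0
center_z = 0.0
size_x = 24
size_y = 24
size_z = 24
exhaustiveness = 16
num_modes = 20
"""

with open("vina_jt010.conf", "w") as fh:
    fh.write(config)

# Requires AutoDock Vina to be installed and on PATH; poses are ranked by
# predicted binding energy, as in the study.
subprocess.run(["vina", "--config", "vina_jt010.conf", "--out", "jt010_docked.pdbqt"],
               check=True)
```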
A specialized post-anaesthetic care unit improves fast-track management in cardiac surgery: a prospective randomized trial Introduction Fast-track treatment in cardiac surgery has become the global standard of care. We compared the efficacy and safety of a specialised post-anaesthetic care unit (PACU) to a conventional intensive care unit (ICU) in achieving defined fast-track end points in adult patients after elective cardiac surgery. Methods In a prospective, single-blinded, randomized study, 200 adult patients undergoing elective cardiac surgery (coronary artery bypass graft (CABG), valve surgery or combined CABG and valve surgery), were selected to receive their postoperative treatment either in the ICU (n = 100), or in the PACU (n = 100). Patients who, at the time of surgery, were in cardiogenic shock, required renal dialysis, or had an additive EuroSCORE of more than 10 were excluded from the study. The primary end points were: time to extubation (ET), and length of stay in the PACU or ICU (PACU/ICU LOS respectively). Secondary end points analysed were the incidences of: surgical re-exploration, development of haemothorax, new-onset cardiac arrhythmia, low cardiac output syndrome, need for cardiopulmonary resuscitation, stroke, acute renal failure, and death. Results Median time to extubation was 90 [50; 140] min in the PACU vs. 478 [305; 643] min in the ICU group (P <0.001). Median length of stay in the PACU was 3.3 [2.7; 4.0] hours vs. 17.9 [10.3; 24.9] hours in the ICU (P <0.001). Of the adverse events examined, only the incidence of new-onset cardiac arrhythmia (25 in PACU vs. 41 in ICU, P = 0.02) was statistically different between groups. Conclusions Treatment in a specialised PACU rather than an ICU, after elective cardiac surgery leads to earlier extubation and quicker discharge to a step-down unit, without compromising patient safety. Trial registration ISRCTN71768341. Registered 11 March 2014. Introduction Anaesthesia for cardiac surgery has traditionally been provided with high-dose opioids and long-acting muscle relaxants, in the belief this technique was associated with optimal haemodynamic stability. The resulting prolonged postoperative ventilation and intensive care unit (ICU) length of stay (LOS) were considered acceptable compromises. Rising costs and the need for faster ICU turnover due to increased demand and reduced resources led to reducing the length of ICU stay after cardiac surgery [1,2]. Since the mid-1990s, intensified postoperative rehabilitation has established itself as the optimal approach to patient recovery. Fast-track treatment has become a popular and accepted standard because it allows for early extubation within six hours and consequently reduced LOS in the ICU and hospital [3][4][5]. A significant reduction in time to extubation (ET) without compromising patient's safety has been demonstrated in numerous studies [5][6][7][8][9][10][11]. Zhu et al. described in Cochrane Database Systematic Review a mean reduction of 5.99 hours (2.99 to 8.99 hours) due to implementation of a time-directed extubation protocol without increasing the risk of postoperative complications compared to standard care. Low-dose opioid anaesthesia will reduce mean ET by 7.40 hours (10.51 to 4.29 hours) compared to high-dose opioid anaesthesia [11]. 
Implementation of a dedicated fast-track protocol that allows not only for earlier extubation but also for earlier transfer from the ICU or post-anaesthetic care unit (PACU) to a step-down unit has been shown to be very effective in reducing ICU-LOS and the total length of hospital stay in retrospective studies [5,6,8]. Zhu et al. showed in a review that low-dose opioid anaesthesia was associated with 3.7 hours (−6.98 to −0.41) lower ICU LOS. Time-directed extubation protocols had 5.15 hours (−8.71 to −1.59) shorter length of stay in the ICU (0.4 to 8.7 hours) compared to conventional groups, as Zhu et al. described, although LOS in hospital was similar in both groups [11]. Utilised in combination, this approach has been associated with both significant cost savings, and also increased ICU bed capacity [12]. Most fast-track treatment protocols for cardiac surgery patients to date, however, have been implemented within the conventional ICU setting. In general, it is possible to perform an extubation in the operating room (OR) with selected patient groups (OPCAB, MIDCAB and so on). This could make sense if no postoperative care unit is available or the fast-track concept is not continued at the ICU. There is still an ongoing discussion about the advantage of an early extubation in the OR. Straka et al. and Montes et al. were not able to show a reduced ICU LOS in cardiac surgery patients who get extubated in the OR [13,14]. Chamchad et al. found in a non-randomized observational study shorter ICU and hospital LOS. With an average ICU LOS of 27 hours, this study showed no additional benefit compared to early extubation in a PACU/ICU [15]. Nicholson et al. investigated in a randomized trial the effect of immediate extubation after coronary artery bypass graft (CABG) surgery compared to at least three hours ventilation before starting weaning on the pulmonary function. The study was performed in a PACU. They concluded that early extubation will not affect pulmonary function after extubation [16]. Our fast-track concept consists of direct postoperative treatment in a PACU with the primary goal of early extubation, followed by transfer to a step-down unit as soon as specific discharge criteria are met [6]. To the best of our knowledge, no prospective randomized study has been published which compares fast-track treatment in the ICU versus fast-track treatment in the PACU. The hypothesis of the study was that patients treated in the PACU would be extubated earlier, and be discharged to a step-down unit earlier than patients treated in the ICU. Accordingly, the objectives of our study were to compare ET and LOS in the PACU or ICU. Methods The study was approved by our local ethics committee (Ethics Committee, Medical Faculty, University of Leipzig, 04107 Leipzig, Reference number 097-2008, trial registration number ISRCTN71768341, http://www.controlled-trials.com/ISRCTN71768341/, registered 11 March 2014), and was conducted as a prospective, randomized, single-blinded, single-centre trial. For each patient, written informed consent was obtained prior to any protocol-related activities. As part of this procedure, the principal investigator or designee explained orally and in writing the nature, duration, and purpose of the study in such a manner that the patient was aware of the potential risks, inconveniences, or adverse effects that may occur. The patient was informed that he/she was free to withdraw from the study at any time. 
The patient received all information that was required by local regulations and International Conference on Harmonisation (ICH) guidelines. During the premedication visit the day before surgery, every patient scheduled to undergo CABG, valve surgery, or combined CABG and valve surgery was screened for inclusion in the study ( Figure 1). Patients who were in cardiogenic shock, were dialysis dependent, or had an additive EuroSCORE of more than 10 were excluded. The final decision for including or excluding the patient into the fast-track concept was taken by consensus decision between the attending anaesthesiologist and cardiac surgeon at the end of their surgery. Inclusion criteria were: haemodynamically stable (systolic blood pressure >90 mmHg and heart rate <120 bpm; adrenaline or noradrenaline <0.04 mcg/kg/min), normothermic (>36°C core body temperature), and no bleeding. Exclusion criteria followed risk factors identified by Constantinides et al. and Akhtar et al. [17,18]: impaired left ventricular function (ejection fraction below 35%), cardiac assist devices pre-or postoperative and cardiopulmonary instability postoperative (high inotropic support, lactate >5 mmol/l, Horowitz index below 200) After the decision to include the patient into the study, the patient was randomized to either postoperative care in the PACU (n = 100) or ICU (n = 100). For that purpose an envelope was picked out of a box containing 200 sealed envelopes (100 for PACU, 100 for ICU admission) and removed from the box subsequently. A further intra-operative exclusion criterion was lack of an available bed in either the PACU or ICU. In such cases, the patient was not randomized, but was sent to the unit with an available bed, and excluded from the study and further analysis. The medical and nursing staff in the ICU and PACU had been informed about the design and the conduct of the study but were not informed as to which patients were enrolled in the study. Data collection and analysis was performed by an independent person who was not part of the anaesthetic, surgical or ICU team, and who was not blinded to treatment allocation. Fast-track anaesthesia protocol Anaesthetic management consisted of oral premedication with clorazepate dipotassium (20 to 40 mg) the evening before and midazolam (3.75 to 7.5 mg) on the day of surgery. Anaesthesia was induced with fentanyl (0.2 mg) and propofol (1.5 to 2 mg/kg). A single dose of rocuronium (0.6 mg/kg) was used to facilitate intubation. Analgesia was maintained throughout the case with a continuous infusion of remifentanil (0.2 mcg/kg/min), and for hypnosis during the pre-and post-cardiopulmonary bypass (CBP) period sevoflurane (0.8 to 1.1 minimum alveolar concentration (MAC)) was administered whereas during CPB a continuous propofol infusion (3 mg/kg/h) was used. A recruitment manoeuvre was carried out prior to weaning from CPB in order to prevent atelectasis. An external convective warming system with an underbody blanket (Bairhugger™, Arizant Healthcare; Eden Prairie, MN, USA) was used after weaning from CPB to ensure a core temperature of at least 36°C was maintained. For early postoperative analgesia, 1 g paracetamol was administered intravenously to each patient before skin closure. In difference to other studies, we did not include all patients or selected fast-track patients only preoperatively. All patients received the fast-track anaesthesia in the OR. 
We carefully selected fast-track patients at the end of surgery according to the criteria identified as risk factors for fast-track failure [1,17,18]. The final decision to continue the fast-track protocol postoperatively was taken after the end of surgery. As our primary end point was postoperative ventilation time, we defined fast-track failure as postoperative ventilation of more than six hours; this threshold was chosen on the basis of the literature, in which it ranges between three and nine hours [19,20].

Treatment in PACU

All patients were transferred to the PACU intubated and mechanically ventilated, with a remifentanil infusion of 0.1 mcg/kg/min. Administration of hypnotic agents was discontinued in the OR. Postoperative analgesia consisted of a bolus of piritramide (0.1 mg/kg) on discontinuation of the remifentanil infusion, followed by bolus doses as required in 2 to 4 mg aliquots, plus regular paracetamol (1 g every six hours) to achieve a pain score between 2 and 4 on an analogue pain scale from 0 to 10. Patients were extubated when they were conscious and obeyed commands, had stable spontaneous ventilation with pressure support of 10 to 12 cmH2O, positive end-expiratory pressure (PEEP) of 5 cmH2O and fraction of inspired oxygen (FiO2) of ≤0.4, were haemodynamically stable, were not bleeding (≤100 ml/h), and had no significant electrocardiographic abnormalities. All patients received non-invasive bi-level positive airway pressure ventilation via a face mask for one hour (Elisee 350™, Saime, Savigny-le-Temple, France) immediately after extubation. Non-invasive ventilation was initially commenced at a pressure support of 10 to 15 cmH2O and a PEEP of 5 cmH2O, with an FiO2 of 0.4; during the period of non-invasive ventilation the pressure support was adapted to the patients' needs. Criteria for discharge to the intermediate care unit (IMC) were that patients must be awake, cooperative, haemodynamically stable (without inotropes) and have both an acceptable respiratory pattern and acceptable blood gas analysis (pO2 > 70 mmHg, pCO2 < 50 mmHg). Chest X-ray and electrocardiogram were performed in all patients to exclude major pathology. The physician-to-patient ratio and the nurse-to-patient ratio were 1:3. The PACU operated daily Monday to Friday from 10:00 to 18:30.

Treatment in ICU

All patients arrived in the ICU intubated and mechanically ventilated, with a remifentanil infusion of 0.1 μg/kg/min. Administration of hypnotic agents was discontinued in the OR. Postoperative analgesia consisted of a bolus of piritramide (0.1 mg/kg) on discontinuation of the remifentanil infusion, followed by bolus doses as required in 2 to 4 mg aliquots, plus regular paracetamol (1 g every six hours). A pain scale was not used on a regular basis for assessing pain; the need for analgesic medication was estimated by the nurses. Extubation criteria were identical to those in the PACU. Non-invasive ventilation after extubation was not implemented routinely. Further treatment in the ICU was determined by the ICU physician according to the German guidelines for intensive care treatment in cardiac surgery patients [21]. Criteria for suitability for transfer to the IMC were identical to those in the PACU. The physician-to-patient ratio was 1:12 and the nurse-to-patient ratio was 1:2. Substantial differences between PACU and ICU treatment are listed in Table 1.

Outcomes

Primary end points were ET and PACU/ICU LOS.
Secondary outcome measures were hospital LOS, overall length of intensive care treatment (total ICT LOS), in-house mortality, low cardiac output, new-onset cardiac arrhythmia, respiratory failure requiring prolonged ventilation or reintubation, and the incidences of surgical re-exploration and renal failure. PACU/ICU LOS is defined as the LOS in the PACU or ICU from the end of surgery until discharge to another unit. Additionally, secondary PACU/ICU LOS includes readmissions from step-down units to the ICU as well as additional ICU time after transfer from the PACU to the ICU for medical or organisational reasons. IMC LOS is defined as the LOS in the IMC until discharge to a general ward. Primary ICT LOS is defined as the overall length of intensive care treatment (ICT) in the PACU/ICU + IMC. Total ICT LOS is defined as the overall length of ICT in the PACU + ICU + IMC, including readmission to a unit of a higher care grade than a general ward and transfer from the PACU to the ICU. If patients were transferred from the PACU to the ICU for medical or organizational reasons, they were still analysed as being in the PACU group, although the additional ICU LOS was not counted in PACU/ICU LOS but in secondary PACU/ICU LOS and total ICT LOS. PACU patients who had to stay past 18:30 were admitted to the ICU for further treatment and were evaluated as described above. Low cardiac output was defined as a central venous saturation of <65% with a haematocrit of >30%. Cardiac arrhythmia included atrial fibrillation and atrioventricular block. Acute renal failure was defined as an increase in postoperative serum creatinine to at least three times the preoperative value, or a serum creatinine >150 μmol/l. Stroke was defined as a new transient or permanent motor or sensory deficit of central origin or unexplained coma.

Statistical analysis

Sample sizes were calculated on the basis of data from a previous retrospective study at our institution [6] using SPSS 16.0 (SPSS Inc, Chicago, IL, USA). Using these data, we estimated that ET in the ICU would occur four hours later than in the PACU and that the standard deviation would be approximately 500 min. We calculated that 93 patients per group would be required to demonstrate a significant reduction in ET with a power of 90% at a significance level of 5%. Accounting for drop-outs and incomplete data, we aimed to recruit 100 patients per group. Comparisons between the two independent groups (ICU vs. PACU) were performed using the Mann-Whitney U test for continuous data, the Mantel-Haenszel test for categorically ordered data (for example, the New York Heart Association (NYHA) score) and Fisher's exact test for binary data (for example, adverse events). A threshold of 0.05 was considered significant. All analyses were performed using SPSS 18.0. Continuous parameters are described by median and interquartile range, categorical data by class-wise allocation numbers, and binary data by the number of events. The primary end point of this study was time to extubation. We did not adjust for multiple testing, so other comparisons are considered explorative.
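The sample-size figure quoted above can be reproduced with a standard two-sample t-test power calculation: a 4 h (240 min) difference with an SD of about 500 min corresponds to an effect size of 0.48. The sketch below uses statsmodels (not the SPSS procedure the authors used) and returns roughly 92-93 patients per group, consistent with the reported 93.

```python
from statsmodels.stats.power import TTestIndPower

effect_size = 240 / 500          # 4 h expected difference / ~500 min SD
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,                  # two-sided significance level of 5%
    power=0.90,
    ratio=1.0,
    alternative="two-sided",
)
print(round(n_per_group))        # ~92-93 patients per group
```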
A total of 223 patients were excluded intraoperatively, due to a lack of capacity in either the ICU or PACU (n = 171), or because they were considered unsuitable for fast-track management at the end of their surgery according to the criteria listed above (n = 52). A total of 200 patients were therefore included in the study from May 2008 until July 2009, 100 in each group. There were significantly more female patients in the PACU group (36 vs. 22, P = 0.04). The characteristics of both groups are listed in Table 2. There was no significant difference in type of surgery.

Time to extubation
The median extubation time in the PACU group was significantly shorter than in the ICU group (90 min [50; 140] vs. 478 min [305; 643]; P <0.001; Figure 2, Table 4). In the PACU group 97% of the patients were extubated within six hours of admission, whereas only 33% of the patients in the ICU group fulfilled this criterion for successful fast-tracking (P <0.001) [5]. In the PACU group five patients required reintubation (three for resurgery, one because of a convulsion, and one for respiratory failure) compared to ten patients in the ICU group (five for re-operation, four for respiratory failure, one for cardiopulmonary resuscitation) (Figure 3, Table 4). The median LOS in the IMC was 23.0 hours [19.9; 41.8] in the PACU group and 21.0 hours [10.5; 28.8] in the ICU group (P <0.004). The overall length of ICT in the PACU + ICU + IMC, including readmission to a unit of a higher care grade than a general ward and transfer from the PACU to the ICU, was 30.9 hours [23.9; 59.9] for patients in the PACU group compared to 43.9 hours [24.9; 65.4] for patients in the ICU group (P = 0.08; Figure 4, Table 4). There was no significant difference in median hospital LOS between the PACU group (9 [8; 11] days) and the ICU group (9 [8; 12] days). Ninety-one of 100 patients in the PACU group were discharged to the intermediate care unit, whereas nine patients had to be admitted from the PACU to the ICU (Figure 1). Three of these were extubated and haemodynamically stable and were admitted to the ICU because of a lack of available beds in the IMC, two patients were admitted because of failure to extubate, two because of bleeding, and two because of cardiac arrhythmia. Four patients in the PACU group had to be admitted from the IMC to the ICU (two because of re-thoracotomy, one because of haemodynamic instability, and one because of respiratory failure). A total of 87% of all patients in the PACU group did not require any treatment in the ICU. Readmission from the general ward to the IMC occurred in 13 patients of the PACU group and was due to cardiac arrhythmia (n = 4), pleural effusion (n = 5), pneumothorax (n = 2), resurgery (n = 1) and pain control (n = 1); no patient in the PACU group discharged to the ward required readmission to the ICU. In the ICU group five patients required readmission from the IMC to the ICU, because of respiratory failure (n = 4) and cardiac arrest (n = 1). Two patients were readmitted from the general ward to the ICU because they required resurgery. Furthermore, in the ICU group eight patients had to be readmitted from the general ward to the IMC because of cardiac arrhythmia (n = 5), neurological deficit (n = 2) and pericardial effusion (n = 1).

Postoperative complications
Postoperative complications for both groups are listed in Table 5. The occurrence of arrhythmias was significantly lower in the PACU group as compared to the ICU group (25 vs. 41, P = 0.02).
There was no significant difference in the rate of pleural or pericardial effusions requiring intervention, renal insufficiency or cerebrovascular stroke. The number of patients requiring resurgery (PACU n = 5 vs. ICU n = 11, P = 0.19) was lower in the PACU group (two for implantation of a pacemaker, two for drainage of a haemothorax, one for thrombectomy for deep vein thrombosis) compared to the ICU group (five for drainage of a haemothorax, two for revision of a valve after valve replacement, two for implantation of a pacemaker, one thoracotomy for bleeding followed by insertion of extracorporeal membrane oxygenation after resurgery, one for refixation of the sternum). One patient from the PACU group required ventilation for longer than 24 hours vs. seven patients in the ICU group (P = 0.07).

Discussion
In our study, we have shown that fast-track treatment of cardiac surgery patients in a dedicated PACU, compared to fast-track treatment in the ICU, significantly reduces ET (90 vs. 478 min; P <0.001) as well as the time to transfer to a step-down unit (LOS 3.3 hours in the PACU compared to 17.9 hours in the ICU). We were able to demonstrate a reduction of ventilation time and a significantly reduced utilisation of ICU capacity after cardiac surgery. Although we did not calculate the cost savings, Cheng et al. have clearly shown that early extubation results in reduced costs and better resource utilisation [4]. Hantschel et al. have also demonstrated that postoperative treatment in a PACU after cardiac surgery results in a 52% cost reduction compared to conventional ICU treatment [12]. Opening a PACU for 8.5 hours a day should also lead to reduced personnel costs compared to a 24-hour ICU. An ET of less than six hours after cardiac surgery is considered an important criterion for successful fast-tracking after cardiac surgery [4,5]. In the PACU group 97% of the patients fulfilled this criterion, but only 33% in the ICU group (P <0.001). In a recent review, Zhu et al. showed that using low-dose opioid anaesthesia reduces ventilation times by 7.40 hours and that using a weaning protocol reduces them by 5.99 hours. In our study, we were able to reduce ventilation times by 6.46 hours, which is comparable to the reductions reported in these studies [11]. Our protocol used low-dose opioid anaesthesia with the short-acting opioid remifentanil. We defined a weaning protocol, which included an early stop of anaesthesia, protocol-driven postoperative pain management and non-invasive ventilation after extubation for at least 60 minutes. Another fast-track criterion is a reduced LOS in the ICU, usually defined as less than 24 hours [5]. According to this criterion, successful fast-track treatment was achieved in 95% of the PACU patients compared to 71% of the patients in the ICU group (P <0.001). In their review, Zhu et al. reported a reduction in ICU LOS of 3.7 hours (−6.98 to −0.41) for low-dose opioid anaesthesia and of 5.15 hours (−8.71 to −1.59) for the use of a weaning protocol, compared to high-dose opioid anaesthesia [11]. In our study, we achieved a reduction in PACU/ICU LOS of 14.6 hours, down to 3.3 hours. This early discharge to a step-down unit allows an ICU bed to be used more than once a day. Gooch et al. developed a model of demand elasticity of ICU bed utilization [22]. The authors argued that ICU beds create their own demand [23]. Under the model of demand elasticity, the case mix of patients in the ICU changes depending on bed availability.
If enough beds are available or no actual patient needs an ICU bed, it is more likely that patients in the ICU who are not as critically ill do not benefit from ICU stay [23]. By bypassing the ICU for fast-track patients, we possibly reduced this effect of demand elasticity and were able to show a reduction in ICU bed utilization. Still, if we included the readmission and direct transfers from the PACU to the ICU, we found a significant reduction for ICU LOS of 14.4 hours (secondary ICU LOS PACU vs. ICU 3.5 to 17.9 hours). Published figures for fast-track failure rates range from 11% to 49% depending on the patient population [17,18,24]. In contrast to studies that included all patients undergoing cardiac surgery, our study population was preselected according to our existing fast-track protocol. We primarily excluded patients with a defined risk for fast-track failure during the premedication visit (patients who were scheduled for emergency surgery, were in cardiogenic shock, were dialysis dependent, or had an additive EuroSCORE of more than 10) [1,17,25]. Another explanation for the low fast-track failure rate of 5% for the PACU group is the fact that the final decision for inclusion of the patient to fast-track treatment was made at the end of the surgery. Wong et al. identified need for inotropic support and bleeding as risk factors for delayed extubation as well as delayed LOS in ICU [26]. In our study, 52 out of the 423 patients primarily included were excluded before randomisation because of hemodynamic instability or bleeding at the end of the operation. This underlines the hypothesis that not only careful preselection of potential fast-track patients during the premedication visit is important, but also that re-evaluation of patient suitability at the end of the operation can lead to a reduction of fast-track failure. The relatively high fast-track failure rate for the ICU group (67% time to extubation >6 hours and 29% PACU/ICU LOS >24 hours) may be attributable to several factors: first, the much better physician-to-patient ratio in the PACU (1:3 in the PACU vs. 1:12 in the ICU) allows the physician to effectively implement and manage an early goal-directed therapy. Since the study from Rivers et al. in septic patients we know that early hemodynamic stabilisation is beneficial for the patient and this is certainly also true for cardiac fast-track patients [27]. Several other studies have shown that an early goal-directed fluid management in postoperative cardiac surgery patients results in an improved hemodynamic stability and can reduce ventilation time and ICU LOS [28,29]. Second, due to the fact that one physician in the ICU cares for 12 patients the preselected fast-track patients will not get the same attention as the patient who really needs ICT. One to two severely compromised patients out of the 12 will result in the fact that weaning of the fast-track patient on ICU will be delayed. Kumar et al. have shown that the presence of an intensivist results in reduced ETs [30]. Third, the limited opening times for the PACU may positively motivate the involved staff to treat the patients optimally including early extubation and hemodynamic and respiratory stabilisation so that the patient can be transferred to the IMC for further treatment. Also, the more focused adherence to the fast-track and enhanced-recovery principles including specifications for medication, postoperative pain control and discharge criteria favours the PACU compared to the ICU. van Mastrigt et al. 
showed in a meta-analysis that a defined weaning-and-extubation protocol is an important key to reduced intensive care LOS [10]. Although this protocol was the same for the PACU and the ICU, the more disciplined execution of the fast-track protocol and the application of non-invasive ventilation in our PACU might be another important factor for the success of early extubation. In a prospective randomized study, Zarbock et al. demonstrated a significant reduction in reintubation and readmission to the ICU/IMC in cardiac surgery patients using continuous positive airway pressure therapy [31]. We found a lower incidence of reintubation in the PACU group (2.5%, five patients) than in the ICU group (5%, ten patients), and a lower rate of readmission from the step-down unit (IMC) to the ICU (four vs. seven patients), without reaching significance. Zhu et al. reported a risk of reintubation of 1.4% in the fast-track group and 1.7% in the conventional group [11], which is lower than in our study. However, our study was underpowered to draw any conclusions about the reintubation rate in comparison with other studies. The incidences of low cardiac output syndrome, prolonged respiratory insufficiency, cardiac arrest, and death tended to be lower in the PACU group without reaching statistical significance. Because these complications were not primary end points, our study was underpowered for demonstrating significant differences between the groups. The incidence of renal failure, stroke, resurgery, and mortality was similar for the PACU and the ICU group. Our study does not allow any conclusion about the safety of our fast-track concept. However, a significantly lower incidence of common postoperative complications for fast-track patients was demonstrated in a prospective study of 1,488 patients by Gooi et al. [3]. Svircevic et al. could not find any evidence for an increased risk of adverse outcomes in 7,989 patients undergoing fast-track cardiac surgery [5]. In a recent review, Zhu et al. came to the conclusion that fast-track interventions have similar risks of mortality and major postoperative complications to conventional (not fast-track) care, and therefore appear to be safe in patients considered to be at low to moderate risk [11]. In contrast to other studies on fast-track in cardiac surgery, which included only patients undergoing coronary artery bypass surgery, our patient population was mixed regarding the type of operation [4,10,19,32]. More than half of our patient population underwent valve surgery, some of them in combination with CABG. Overall, in our patient population of n = 200 patients only 41.5% were CABG patients (41 vs. 42). A total of 6.5% of all patients (four vs. nine) underwent combined procedures (for example aortic and mitral valve surgery, or valve surgery and CABG). We have also shown that fast-track treatment utilising a dedicated PACU can be successfully implemented for different types of cardiac operations.

Limitations of the study
Our demographic data show that there is a significant difference in gender (more female patients in the PACU group). In several studies, female gender was found to be a risk factor for delayed postoperative extubation and prolonged ICU length of stay [1,26]. This might have favoured the ICU group.
Anaesthesia and surgery time in the ICU group was significantly longer, but there was no difference in XCL and cardiopulmonary bypass time, which were (amongst others) identified as risk factors for delayed postoperative extubation (>6 hours) and prolonged ICU LOS (>24 hours) [1,26]. Regarding anaesthesia and surgery time, we observed only weak correlations with our outcome variables in both the PACU and ICU groups. Hence, it is unlikely that this imbalance in baseline characteristics affects the main conclusion of our study. Regarding the adverse events, the study was not adequately powered to identify significant differences between the groups.

Conclusions
Our study showed that our fast-track treatment in a dedicated PACU leads to a high rate of success (95%) compared to the ICU (33%). We attribute this difference to the better physician-to-patient ratio, allowing for more focused, early postoperative management, and to better adherence to an established fast-track protocol. Delaying the decision about patient suitability for fast-track treatment until the end of surgery may also contribute to reducing the incidence of fast-track failures. Running a PACU separate from the ICU in a different part of the hospital, an excellent physician-to-patient ratio and strong adherence to the fast-track protocol are, from our point of view, among the success factors of our study.

Key messages
ET for cardiac surgery patients in a fast-track protocol is significantly shorter in a dedicated PACU than in the ICU.
PACU LOS is significantly shorter than ICU LOS.
A Simple Target Interception Task as Test for Activities of Daily Life Performance in Older Adults

Previous research showed that a simple target interception task reveals differences between younger adults (YA) and older adults (OA) on a large screen under laboratory conditions. Participants intercept downward moving objects while a horizontally moving background creates an illusion of the object moving in the opposite direction of the background. OA are more influenced by this illusory motion than YA. OA seem to be less able to ignore irrelevant sensory information than YA. Since sensory integration relates to the ability to perform Activities of Daily Living (ADL), this interception task can potentially signal ADL issues. Here we investigated whether the results of the target interception task could be replicated using a more portable setup, i.e., a tablet instead of a large touch screen. For YA from the same, homogeneous population, the main effects were replicated, although the task was more difficult in the tablet set-up. After establishing the tablet's validity, we analyzed the response patterns of OA that were less fit than the OA in previous research. We identified three different illusion patterns: a (large) illusion effect (indicating over integration), a reverse illusion effect, and no illusion effect. These different patterns are much more nuanced than previously reported for fit OA, who only show over integration. We propose that the patterns are caused by differences in the samples of OA (OA in the current sample were older and had lower ADL scores), possibly modulated by increased task difficulty in the tablet setup. We discuss the effects of illusory background motion as a function of ADL scores using a transitional model. The first pattern commences when sensory integration capability starts to decrease, leading to a pattern of over-integration (illusion effect). The second pattern commences when compensatory mechanisms are not sufficient to counteract the effect of the background motion, leading to direction errors in the same direction as the background motion (reverse illusion). The third pattern commences when the task requirements are too high, leading OA to implement a probabilistic strategy by tapping toward the center of the screen.

INTRODUCTION
The development of new health technologies and advancements in medicine have helped to extend the life expectancy of the global population (Dey, 2017). This increased life expectancy has led to a rise in the population of 60 years old and older, considered as older adults (OA) by the World Health Organization (Dey, 2017). In order to help solve future societal challenges and decrease, for instance, health care costs, age-related changes in physical abilities and functioning need to be studied. Assisting OA to live independently for a longer time can help to reduce the age-related costs for society and to increase their quality of life. In order to live independently, an individual needs to be able to perform both the basic (Katz et al., 1963) and instrumental (Lawton and Brody, 1969) activities of daily living (ADL), such as dressing and shopping. To do so, OA need a good level of mobility (Lowry et al., 2012) and need to be able to properly integrate the sensory information from their surroundings (Lowry et al., 2012). Sensory integration is known to change during the lifespan (Mozolic et al., 2012; Freiherr et al., 2013; de Dieuleveult et al., 2017).
In a recent review (de Dieuleveult et al., 2017), we concluded that OA seem to integrate as much information as possible from their surroundings without properly weighing the information, thus also using irrelevant or unreliable information (Berard et al., 2012; Brodoehl et al., 2015; de Dieuleveult et al., 2017). Also, the addition of a dual task was shown to decrease the performance of OA to a greater extent than that of YA, indicating that OA seem to be less able to compensate for the increased attentional and sensory demand (Redfern et al., 2001; Mahboobin et al., 2007; Redfern et al., 2009; Bisson et al., 2014; de Dieuleveult et al., 2017). Vision is essential for goal-directed reach movements toward moving targets (Brouwer et al., 2002, 2003; Kavcic et al., 2011; Brenner and Smeets, 2015). There are two ways to collect information to judge a target's direction of motion, and both require the integration of multisensory information. The first way is to use the actual motion of the target in space (visual information) together with information about changes in the eye's orientation in the head and changes in the orientation of the head (proprioceptive information) (Nakayama, 1985; Schweigart et al., 2003). The second way is to assume that the surrounding remains static and to use the relative motion of the target's retinal image and its surrounding to estimate the target's motion in space (visual and vestibular information). The background is then interpreted as optic flow due to the participant's own motion (Brenner and van den Berg, 1994). If the background is moving, relying on relative motion will lead to systematic errors. In this case, refraining from relying on relative motion would be beneficial to increase precision. In an earlier study, Berard et al. (2012) found that old age affects the ability to down-regulate the influence of such visual information in a walking task. In our original experiment (de Dieuleveult et al., 2018), we investigated whether this effect generalizes to another paradigm, namely the paradigm of Brouwer et al. (2003) that used the Duncker illusion (or induced motion) (Duncker, 1929), in which an object appears to move differently due to movement in its surroundings (Soechting et al., 2001; Zivotofsky, 2004). We showed that OA were more affected than YA by the illusion of motion created by a horizontally moving background when trying to intercept disappearing targets on a large screen. OA's interception points (called taps) were deviated more to the left for a right background motion and more to the right for a left background motion than those of YA. These results could reflect a reduced ability to ignore or downregulate irrelevant sensory information and/or a greater reliance on vision because of unreliable somatosensory and proprioceptive systems (de Dieuleveult et al., 2018). If the ability to ignore or downregulate irrelevant sensory information is indeed reduced in OA, our interception task may be a useful tool to diagnose sensory integration problems that could underlie (future) problems in ADL. Aging-related decline in perception and multisensory integration has been studied before, but the relation between this decline and ADL performance, and its possible interaction with dual tasks, is less well researched. If a reliable relation can be shown, the measure of multisensory integration may be used as an early diagnostic and/or predictive tool for ADL problems in individual OA.
Such a tool is clinically relevant since decline in ADL can be slowed down or prevented using different exercise approaches targeting specific ADL problems. These approaches are already available and commonly used in clinical practice, for example strength training (Hazell et al., 2007), functional training (Liu et al., 2014), and balance training (Bellomo et al., 2009). Training of multisensory integration in the OA population has been developed as well (de Vreede et al., 2005; Setti et al., 2014; Merriman et al., 2015). However, no diagnostic tool assessing multisensory integration deficits related to the ADL exists (de Dieuleveult et al., 2017). Consequently, existing training approaches are blind to potential sources of the deficit, and training cannot be specifically tailored to individual OA's multisensory integration issues during daily life. Self-report ADL questionnaires such as the NEADL can suffer from bias in memory and social demands and are not suited to finding the cause of the ADL problem. The interception task may be able to do so in a direct and fun way, assessing a person's sensory integration functioning by means of performance rather than self-report. It could be a good starting point to develop a clinically useful diagnostic toolkit for multisensory integration issues related to ADL. This could help clinicians to better tailor training programs to the individual's underlying problems. An important step toward the development of a clinical tool is to develop and validate a mobile set-up of the task that is practical and affordable and can be used for OA with different degrees of ADL performance. The current interception task was adjusted from its original large screen (49 inch) set-up to a smaller tablet (12.3 inch) setup in order to increase clinical applicability: a smaller setup is more convenient for clinical practice and can also be transported to measure individuals who are less fit and cannot easily travel to a clinician. The experiment reported here was roughly the same as in de Dieuleveult et al. (2018). YA and OA intercepted disappearing targets moving downwards on the tablet with a horizontally moving background. The interception task was performed either with or without one of two secondary tasks. The secondary tasks were chosen to impinge on different processes that are expected to relate to sensory perception and integration. The first one mainly disturbed proprioception, which is highly involved in postural stability. Postural stability is necessary for motor control, coordination and steadiness during the performance of ADL (Ghai et al., 2017). The second dual task was mainly cognitive. Cognitive deficits have been shown to interfere with motor performance, and tasks that used to be automatic (such as walking) require more cognitive attention with increasing age (Lima et al., 2015). OA are also more prone to falls when they try to walk and perform a second task at the same time. A larger background effect when adding dual tasks might reveal compensatory mechanisms that normally help to decrease deficits caused by the earlier reported over integration in OA. Additionally, the use of dual tasking in training programs has been shown to improve postural stability, notably for OA (Ghai et al., 2017). First, this experiment aims at investigating whether we can replicate the results found in our original experiment (de Dieuleveult et al., 2018) with a small, portable tablet instead of a large screen.
Therefore, we test a sample of YA from the same population as in the original experiment, and we hypothesize that the effect of the moving background would be similar for YA in both experiments (H1). The second aim was to investigate task performance of OA with problems performing ADL. Previously, the sample of OA was relatively homogeneous, with high ADL scores. Therefore, we recruited OA from a fall prevention project with a varying level of physical functioning (varying scores on clinical ADL-related tests). We hypothesize that this group of OA with ADL issues would be highly influenced by the background motion (H2). Finally, we wanted to investigate the influence of dual tasks on the performance of OA with ADL issues. In our previous work (de Dieuleveult et al., 2018), the two additional tasks did not influence the illusion effect of YA and OA without ADL issues. We hypothesize that OA with ADL issues would be more influenced by the additional tasks as compared to YA (H3) and that OA with ADL issues would show differences in illusion effect between single and dual task conditions that could be associated with a decline in compensatory mechanisms that are still effective in OA without ADL problems (H4).

Participants
Twenty-four OA (70-88 years old, mean age 75.9 ± 4.65 years, 14 women) and nineteen YA (20-32 years old, mean age 25.6 ± 3.52 years, 12 women) participated in the study. Younger participants were recruited from the TNO participant pool (Soesterberg, Netherlands) and older participants were recruited from a fall prevention program in the health center Marne (Amstelveen, Netherlands). Participants self-reported being right-handed and having normal or corrected-to-normal vision (participants were asked to use their normal vision aids if needed). Participants' hearing was checked by the examiner by asking them whether they could distinguish a high from a low tone used later in the experiment. Participants self-reported not to have vestibular or balance dysfunction, psychiatric symptoms, or musculoskeletal or neurological problems. They self-reported to be in relatively good health during the 2 weeks prior to the experiment and on the day of the experiment. The Mini Mental State Examination (MMSE) was used as a screening test for cognitive impairments; a cut-off score of 24 or lower was used for exclusion (Dick et al., 1984).

Task
The interception task was the same as the task described in de Dieuleveult et al. (2018). In the baseline condition, participants sat on a chair. The participants were asked to intercept, as quickly and as accurately as possible, moving targets that had disappeared after 150 ms, with the tip of their right index finger. If participants hit the target correctly, it reappeared on the screen in green and stayed still at the position of the hit. If they missed the target, it reappeared in red on the screen and moved in the opposite direction of the error. For instance, if the participant hit a position on the screen that was below the actual position of the target and too far to the left, the target moved upwards and to the right. This feedback informed participants about their performance in the task and might help them learn to ignore the motion of the background. If they did not tap at all, the trial was counted as a no tap trial and the participant had to put their index finger on the home button again in order to start a new trial. The balance and counting conditions included a secondary task.
In the balance condition, participants were to keep their balance, standing on a block of foam rather than sitting (see the Stimuli and Materials section below for more information on the foam). In the counting condition, participants were sitting, as in the baseline, but had to count the number of high and low tones that they heard during the experiment.

Stimuli and Materials
In contrast to the original study (de Dieuleveult et al., 2018), the interception task was now performed on a 12.3 inch tablet rather than a 49 inch screen. Parameters were scaled down from the original experiment to best fit the tablet's screen (see details in Table 1). Furthermore, in this experiment the target moved in three rather than five different directions (see further details below). During the experiment, the stimuli were presented on a 12.3-inch tablet (Microsoft Surface, size: 29.21 × 20.14 cm, resolution: 2736 × 1824 pixels) positioned on a table in a vertical orientation but tilted backward by 30°. The eyes of the participants were at a distance of approximately 60 cm from the screen during the experiment (so 1 cm corresponds to about one degree of visual angle). For the balance condition, where participants were standing instead of sitting, the height of the tablet was increased to keep the height of the shoulders relative to the tablet approximately the same across conditions. To start each trial, participants had to place their index finger on the home position, a green disc with a diameter of 1.6 cm situated 6 cm below the center of the screen [i.e., at coordinates (0, -6); see Figure 1 for an overview of the stimulus lay-out]. After a random time between 600 and 1200 ms, the target, a black disc with a diameter of two centimeters, appeared 8.65 cm above the center of the screen (0, 8.65) and moved toward the bottom of the screen. The target moved with a vertical velocity of 12 cm/s and one of three different horizontal velocities (−7.2, 0, or 7.2 cm/s). The three different directions of target motion are referred to as "S" for targets moving straight downwards, "L" for targets deviating to the left, and "R" for targets deviating to the right. The targets were visible for 150 ms and then disappeared. The disappearing points relative to the center of the screen (0, 0) were: (−0.51, 7.05), (0.00, 7.05), and (0.51, 7.05). The targets and the home position were presented on a full-screen background of white and blue squares (2 cm sides) that formed a checkerboard. The background started to move at 9.6 cm/s to the right or the left as soon as the target appeared. Auditory stimuli were presented to the participants by a computer through speakers situated behind the tablet. The computer presented sequences of high (1 kHz for 500 ms, 40% of the tones) and low tones (250 Hz for 100 ms, 60% of the tones). The intervals between the tones were randomly generated between 2 and 6 s. The tones were present during all three conditions of the interception task, but participants only had to pay attention to them in the counting condition. The block of foam used in one of the pretests and in the balance condition had a length and width of 40 cm, a height of 15 cm with no load, a height of about 10 cm when compressed by the weight of a participant, and a density of 35 kg/m³.

Procedure
The session started with four standardized clinical tests used to estimate the mental and physical functioning of the participants.
The Mini-Mental State Examination (MMSE) (Dick et al., 1984), a 30 points questionnaire comprising short questions and simple tasks, was used to screen for cognitive impairments (cut off score of 24). The modified-Clinical Test of Sensory Interaction and Balance (m-CTSIB) (Shumway-Cook and Horak, 1986;Horn et al., 2015), a performance test in which participants have to stand on a firm and a foam surface with their eyes open or closed for 30 s each, was used to test sensory integration and balance deficits. The block of foam mentioned in the Stimuli and Materials section was used to perturb balance in the m-CTSIB. This test was also used to assess if the participants were fit enough to perform the balance condition safely. In order to participate in this condition, they had to be able to stand on the foam with their eyes open for 30 s without losing balance. The Short Physical Performance Battery (SPPB) (Guralnik et al., 1994;Pavasini et al., 2016) was used to assess lower limb physical functioning. It includes balance tests (side-by-side stand, semi tandem stand and tandem stand), chair stand tests and gait speed test (the ten meters gait speed test was used instead of the threeor four-meters tests). Finally, the Nottingham Extended ADL scale (NEADL) (Nouri and Lincoln, 1987) was used to assess difficulties in performing the ADL. This test is more precise than the Instrumental ADL scale (Lawton and Brody, 1969) used in the original experiment (de Dieuleveult et al., 2018). This questionnaire includes 22 questions separated in four categories, mobility (six items), kitchen (five items), domestic (five items), and leisure (six items) activities. Items can be rated from 0 (not capable of performing this activity) to 3 (easy to perform) as compared to eight items rated 0 (not capable of performing this activity) to 1 (capable of performing this activity) in the Lawton and Brody scale. The test on the tablet was preceded by a presentation of standardized instructions by means of a PowerPoint presentation explaining the entire task and procedure to the participants. The examiner ensured that participants were able to hear the tones before starting the experiment. Participants first performed a practice session in the sitting position consisting of six trials with the target remaining visible (two trials for each of the three possible directions of target motion in randomized order) and randomized trials with the target disappearing after 150 ms until participants felt comfortable with the task. When the participants indicated that they understood the task, the experiment started. The three experimental conditions (baseline, balance, counting) were presented in random order. Each condition was presented in a block consisting of 57 trials: three practice trials without any background motion at the start (one trial per direction of target motion, presented in random order), followed by 54 experimental trials. In the 54 experimental trials, the background moved to the left in 27 trials (nine trials per direction of target motion) and the background moved to the right in the other 27 (nine trials per direction of target motion). The three blocks were separated by short breaks. Participants could take breaks at any time during the experiment. Participants were asked to report the number of high and the number of low tones they heard during the counting block at the end of it so that the examiner had an impression as to whether they adhered to the secondary task. 
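The block structure of the interception task described above (57 trials per condition: three practice trials without background motion followed by 54 experimental trials, nine per combination of background direction and target direction) can be illustrated with the following sketch. The trial counts come from the text; the shuffling procedure and the data representation are assumptions, since the actual stimulus software is not described at this level of detail.

```python
# Illustrative construction of one 57-trial block; randomisation details beyond
# the trial counts stated in the text are assumptions.
import random

TARGET_DIRECTIONS = ("L", "S", "R")        # horizontal target velocity of -7.2, 0 or +7.2 cm/s
BACKGROUND_DIRECTIONS = ("left", "right")  # background moving at 9.6 cm/s

def make_block(seed=None):
    rng = random.Random(seed)

    # Three practice trials without background motion, one per target direction.
    practice = [{"target": t, "background": None, "practice": True}
                for t in TARGET_DIRECTIONS]
    rng.shuffle(practice)

    # 54 experimental trials: 2 background directions x 3 target directions x 9 repetitions.
    experimental = [{"target": t, "background": b, "practice": False}
                    for b in BACKGROUND_DIRECTIONS
                    for t in TARGET_DIRECTIONS
                    for _ in range(9)]
    rng.shuffle(experimental)

    return practice + experimental

assert len(make_block(seed=1)) == 57
```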
Participants were considered to adhere to the counting task if they gave a plausible answer, even if it was not the correct one. The data of the participants who reported that they did not count, or who gave an answer very far from the correct one, were excluded from the data analyses for the counting condition.

Replication of the Original Experiment on a Tablet With YA
The analysis of the replication of the original experiment on a tablet was done only in the YA group, because they were recruited from the same population, enabling the comparison between the two studies. The participating YA were recruited through the same participant pool as in the original experiment.

Direction Error and Illusion Effect
The direction error is defined as the angle in degrees between the direction that the target was moving in and the direction that the participant considered it to be moving in after it disappeared, according to the position of the tap (see Figure 2). When the tapping position was to the left of the actual position of the target at the moment of the tap (as in the example given in Figure 2), the direction error was assigned a negative value; if it was to the right, the direction error was assigned a positive value. The mean direction error was calculated for each participant, experimental condition and direction of background motion. The direction error is our main dependent variable. The illusion effect is defined as the direction error for leftward background motion minus the direction error for rightward background motion.

Hit, Miss, and "No Tap" Trials
Trials were considered to be hit trials if the participant's finger hit the screen within 2 cm from the center of the target. If the participant's hit exceeded the 2 cm range, the trial was considered a miss trial. Trials in which participants did not hit the screen at all were considered "no tap" trials.

Average Time to Tap
The time to tap is the time between the appearance of the target and the moment that the participant's finger hit the screen.

Statistical Analyses
The MATLAB functions qqplot and vartest2 were used to assess the normality of the distribution of the residuals and the equality of the variances between the groups and conditions. With normal distributions and variances, one-way ANOVAs and paired t-tests were used to evaluate interaction effects of the different independent variables. With non-normal distributions and/or variances, non-parametric tests were used: the Mann-Whitney U-test for differences due to age (unpaired samples) and the Wilcoxon signed-rank test for differences due to conditions (paired samples). Effects are considered to be significant if p < 0.05. The chi-square test was used to test whether the groups of participants differed with respect to gender. The analysis of the replication of the original experiment on a tablet (H1) was done only in the YA group; these participants were recruited from the same population and through the same participant pool as in the original experiment, enabling the comparison between the two studies. To investigate task performance of OA with problems performing ADL (H2), the performance of this population in the task was compared to the performance of the (relatively fit) OA of the original experiment. To investigate the influence of dual tasks on the performance of OA with ADL issues, the performance of this population in the task was compared to that of YA (H3). We also looked at the different responses that OA with ADL issues and (relatively fit) OA may show in the task (H4).
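For concreteness, the dependent measures defined above can be computed along the lines of the sketch below. This is an illustrative reconstruction in Python rather than the authors' MATLAB analysis code; in particular, the reference point and sign convention used for the direction error are assumptions that may differ in detail from the original implementation.

```python
# Illustrative computation of direction error, hit classification and illusion effect.
# Coordinates are in cm; the target appears at (0, 8.65) and moves downwards, so a
# downward velocity has a negative y component, e.g. (-7.2, -12.0) for an "L" target.
import math
from statistics import mean

TARGET_START = (0.0, 8.65)

def direction_error_deg(tap_xy, target_velocity_xy):
    """Signed angle (degrees) between the actual target direction and the direction
    implied by the tap, taken from the target's appearance point to the tap position.
    Negative values correspond to taps to the left of the actual target path."""
    implied = math.atan2(tap_xy[0] - TARGET_START[0], -(tap_xy[1] - TARGET_START[1]))
    actual = math.atan2(target_velocity_xy[0], -target_velocity_xy[1])
    return math.degrees(implied - actual)

def is_hit(tap_xy, target_xy_at_tap, radius_cm=2.0):
    """A trial counts as a hit if the tap lands within 2 cm of the target's centre."""
    return math.dist(tap_xy, target_xy_at_tap) <= radius_cm

def illusion_effect(errors_left_bg, errors_right_bg):
    """Mean direction error for leftward background motion minus that for rightward motion."""
    return mean(errors_left_bg) - mean(errors_right_bg)
```

With these conventions, a positive illusion effect corresponds to taps that are shifted against the direction of the background motion, which is the pattern reported for YA.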
To answer these four hypotheses, we tested for effects of the direction of background motion (left and right) and experimental condition (baseline, counting and balance) on the interception task variables between the groups. We compared whether the effects in the balance condition were different from those in the baseline condition and whether the effects in the counting condition were different from those in the baseline condition.

RESULTS
YA (n = 19) performed all the pretests and conditions of the experiment. The majority of the OA (n = 15) were able to perform all conditions of the experiment (baseline, balance and counting). Five participants were unable to stand on the foam with eyes open for 30 s in the m-CTSIB and therefore were not asked to perform the balance condition. Three of these five participants also failed at counting the tones in the counting condition. Four additional OA did not count the tones in the counting condition but did perform the m-CTSIB properly. The data of the seven participants who did not properly perform the counting task (all of them OA) were not included in the analysis of the counting condition results. Additionally, two OA were excluded from the analyses because their performance on the interception task was very different from that of the other participants (a high number of no tap trials or long times to tap). One of these two participants performed only the baseline condition; the other one performed the baseline and counting conditions. In total, 19 YA were included in the analyses of the data for the three experimental conditions, and 22 OA in the baseline condition, 19 OA in the balance condition and 16 OA in the counting condition.

Replication of the Original Experiment on a Tablet in YA (Hypothesis 1)
The group of YA in the current experiment was not different from the group of YA of the original one in terms of age. Table 2 shows the comparison between the lab setup of the original experiment and the mobile setup of the current experiment for the different performance measures, for YA. Figure 3 shows an effect of the background motion in YA, with direction errors more positive for a left background motion and more negative for a right background motion in all three conditions (baseline, balance and counting). This effect of the background motion is larger than the one we found in the original experiment (Student's paired t-tests: all p < 0.01). These results were not different between the three conditions for YA (Student's paired t-tests: all p < 0.01). The main performance patterns (i.e., the presence and direction of the illusion effect and the direction error) are similar in both experiments, but the tendency to hit toward the center, the lower percentage of hits and the longer times to tap in the baseline and balance conditions (Table 2) suggest that task difficulty increased in the tablet set-up compared to the original laboratory setup.

Table 2 | Comparison between the lab setup of the original experiment and the mobile setup of the current experiment for the different performance measures, for YA. p-values are from Mann-Whitney U-tests (one for each of the three conditions).
Illusion effect: Replicated. The direction of the illusion effect is the same, but its size is significantly larger than in the original experiment (all p < 0.001).
Target direction effect: Replicated. The target direction effect is similar to that of the original experiment: participants tapped following the target's direction of motion. However, the current experiment showed a (non-significant) tendency of participants to tap toward the center of the screen that was not found in the original experiment (all p < 0.001).
Percentage of hits: Not replicated. The average percentage of hits in the current experiment was significantly lower than in the original experiment.

Performance of OA With ADL Problems in the Baseline Condition as Compared to (Relatively Fit) OA Measured in the Original Experiment (Hypothesis 2)
As expected and intended, the group of OA was different from the group used in the original experiment, with OA in the current experiment being on average older and scoring lower on the ADL-related pretests. Figure 4 shows the direction errors of the OA's taps according to the experimental condition and background direction of motion for the original and current experiments. For H2 we focus on the results of the OA. The figure suggests a reverse effect of the background for the OA in the baseline condition; however, the background motion did not affect the direction errors significantly (Z = −0.146, p = 0.884). These results are different from the ones found for the (relatively fit) OA in the original experiment, in which OA had a large illusion effect caused by the background motion, larger than the effect observed for YA. The absence of an illusion effect in OA found in the current experiment was not in accordance with our hypothesis (H2). The variability in the direction errors of OA is relatively large, which matches the impression of the leader of the experiment, who observed that some participants seemed to tap following the background motion rather than the target's direction of motion, and that other participants seemed to tap continuously toward the center of the screen instead of following the target's direction of motion. These patterns may cancel each other out and result in the lack of an overall effect. Therefore, we decided to look at the illusion effect for individual participants. The results are presented in Figure 4.

Illusion Effect
Figure 5 shows the illusion effect for each participant in each experimental condition. In the baseline condition, YA tended to have a positive illusion effect, meaning that they tapped more to the left for a right background motion and more to the right for a left background motion. This is in accordance with the results found in the original experiment and in the literature (de Dieuleveult et al., 2018). The group of OA showed larger variability. In the baseline condition, three different patterns of OA could be observed, with at least six participants in each: participants with an illusion effect similar to, but larger than, the one found in YA ("over integration" pattern), participants with no illusion effect ("minimal use of visual information" pattern), and participants with a reverse illusion effect who tapped more to the right for a right background motion and more to the left for a left background motion ("dragged by the background" pattern). These results are different from the results of the original experiment for OA. In the baseline condition of the original experiment, only one OA showed a reverse effect of the illusion ("dragged by the background" pattern). All the other OA showed either a positive illusion effect ("over integration" pattern) or no effect of the illusion ("minimal use of visual information" pattern).

Performance of OA With the Three Patterns in the ADL-Related Pretests and in the Interception Task in the Baseline Condition
The results above showed three different patterns of tapping in the OA of the current experiment ("over integration" pattern, "minimal use of visual information" pattern, and "dragged by the background" pattern).
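These three patterns are reported descriptively; no numerical criterion for assigning an individual participant to one of them is given. Purely as an illustration, such an assignment could be operationalised from each participant's illusion effect as in the sketch below, where the ±2° threshold is an assumption and not a value taken from the study.

```python
# Hypothetical assignment of a participant to one of the three illusion-effect patterns.
# The threshold of 2 degrees is an assumption made for illustration only.
def classify_pattern(illusion_effect_deg, threshold_deg=2.0):
    if illusion_effect_deg > threshold_deg:
        return "over integration"               # illusion effect in the same direction as in YA, but larger
    if illusion_effect_deg < -threshold_deg:
        return "dragged by the background"      # reverse illusion effect
    return "minimal use of visual information"  # no clear effect of the background motion
```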
The ADL-related pretests results were not significantly different between these three patterns of OA in the baseline condition (Mann-Whitney U-test: all p > 0.05). However, the "over integration" pattern of OA had less no tap trials than the "dragged by the background" pattern (W = 42, p = 0.029) and had longer times to tap as compared to the "minimal use of visual information" pattern (W = 80, p = 0.031) in the baseline condition. The other results of the interception task were not significantly different between the three patterns of OA in the baseline condition. Figure 6 shows a higher percentage of hits for YA compared to OA in the baseline condition (U = 209, p < 0.001). This difference between YA and OA followed the same trend as the one found in the original experiment. Percentage of Hits, Number of No Tap Trials and Time to Tap OA had a higher number of no tap trials compared to YA in the baseline (U = 209, p < 0.001). This difference between YA and OA followed the same trend as the one found in the original experiment. The time participants took to hit the screen was different between the age groups in the baseline condition with OA being slower than YA (U = 209, p = 0.043). This trend was different than the results of the original experiment that showed no difference between YA and OA in terms of times to tap (de Dieuleveult et al., 2018). Influence of Dual Tasks on Interception Task Performance for YA and OA With ADL Problems (Hypothesis 3) Illusion Effect As seen in Figure 4, the overall illusion effect was not different between the three conditions for OA (Wilcoxon signed-rank test on the illusion effect: all p > 0.05). The illusion effect was significantly smaller for OA as compared to YA in the balance and counting conditions (respectively, U = 180.5, p = 0.007 and U = 152, p = 0.006). This did not reach significance in the baseline condition (U = 209, p = 0.078). The introduction of additional tasks had an effect on the number of OA showing the three illusion effect patterns ("over integration" pattern, "dragged by the background" pattern, and "minimal use of visual information" pattern). For this analysis, only the data of OA that performed the three experimental conditions were taken into account to compare the changes of patterns in the participants. In the balance condition, the percentage of participants with the "minimal use of visual information" pattern was larger than the one for the baseline condition (73.3% as opposed to 53.3%) decreasing the percentage of participants with the "over integration" pattern as compared to the baseline condition (13.3% as opposed to 33.3%). The percentage of participants with the "dragged by the background" pattern was the same between the baseline and balance conditions (13.3%). In the counting condition, all of the OA showed the "minimal use of visual information" pattern except one participant that had a reverse illusion effect. This condition decreased the percentages of participants with the "dragged by the background" pattern (6.7%) and with the "over integration" pattern (0%) and increased the percentage of participants with the "minimal use of visual information" pattern (93.3%) as compared to the baseline and balance conditions. Figure 6 shows a higher percentage of hits for YA compared to OA in each condition (Mann-Whitney U-test: all p < 0.001). This graph also shows that OA hit less targets in the counting condition compared to the baseline (W = 89, p = 0.020). 
Number of No Tap Trials
OA had a higher number of no tap trials compared to YA in all three conditions (Mann-Whitney U-test: all p < 0.001). For both groups the number of no tap trials was higher in the counting condition compared to the baseline condition (YA: W = 0, p = 0.03; OA: W = 7.5, p = 0.005).

Time to Tap
The time participants took to hit the screen differed between the age groups in the baseline and balance conditions, with OA being slower than YA (baseline: U = 209, p = 0.043; balance: U = 180.5, p = 0.018). This was not observed in the counting condition (U = 152, p = 0.253). The times participants took to hit the screen were not different between the baseline condition and the two other experimental conditions in either of the age groups (Wilcoxon signed-rank test: p > 0.05).

Illusion Effect
We observed that the way the types of illusion effects are distributed across participants depends on the condition for the OA in the current experiment (right of Figure 5), while it was virtually constant across conditions for the (relatively fit) OA in the original experiment (baseline: 47.4% "over integration" pattern, 47.4% "minimal use of visual information" pattern, 5.2% (one participant) "dragged by the background" pattern; balance: 47.4% "over integration" pattern, 52.6% "minimal use of visual information" pattern). The increased cognitive demand of the counting task had the same effect in the two experiments, slightly increasing the number of participants with the "minimal use of visual information" pattern and slightly decreasing the number of participants with the "over integration" pattern compared to the baseline (counting: 31.6% "over integration" pattern, 68.4% "minimal use of visual information" pattern, 0% "dragged by the background" pattern).

Percentage of Hits, Number of No Tap Trials and Time to Tap
OA in both experiments hit fewer targets in the counting condition as compared to the baseline condition. OA in both experiments had more no tap trials in the counting condition compared to the baseline condition. The times to tap for OA in the current experiment were not different between the conditions. These results are different from what we found in the original experiment, where OA hit the screen faster in the balance condition and slower in the counting condition compared to the baseline condition.

DISCUSSION
This study first aimed at investigating whether we can replicate, with a mobile setup, the results found in an earlier experiment in which bulky lab-grade equipment was used, in order to develop a diagnostic tool of sensory integration issues for OA. Second, this study aimed at investigating the differences between healthy YA and OA with varying levels of physical functioning in the interception task.

Replication of the Original Experiment on a Tablet in YA (Hypothesis 1)
Our first hypothesis was that the effect of the moving background would be similar in both experiments (H1). As Table 2 indicates, the main effects were replicated, but there were also differences between the two experiments: taps deviated toward the center of the screen, the percentage of hits was lower, and the times to tap the screen were longer in two of the three conditions as compared to the original experiment. Differences in task difficulty may have caused these dissimilarities. By scaling down the setup to the tablet size, three adjustments were necessary that likely increased the general task difficulty: target size, trajectory visibility, and target speed, as discussed below.
We argue that the increased task difficulty is the underlying cause of all above mentioned effects. The size of the target was significantly smaller in the current experiment compared to the original one with a diameter of two centimeters (2 • of visual angle) instead of six centimeters (6 • of visual angle) while the distance between the eyes and the screen was about the same (see Table 1). This may have increased the difficulty of the overall task for all participants. Indeed, it is known that a decrease in target size negatively impacts its detection (Kincaid et al., 1960;Johnson et al., 1978). Additionally, the size of the target was larger in proportion to its visible traveled trajectory. In the original experiment, the five different direction of target motion were easily distinguishable. In the current experiment, the deviations were smaller and, with the addition of the target being proportionally larger, it might have been harder to distinguish the three directions of target motion because of larger overlapping in trajectories at the beginning of the path. Another difference that may have been essential was the changed speed of the target; 12 cm/s in the current as opposed to 50 cm/s in the original experiment (de Dieuleveult et al., 2018). The larger illusion effect with slower targets is in accordance with results found by Brenner and Smeets (2015) who showed that the influence of the moving background was larger for slower targets (Brenner and Smeets, 2015). The exact cause of this effect has to be examined in order to understand why the influence of the moving background is smaller when the target or, associated to that, the hand is moving faster. Apart from differences in task difficulty, we cannot exclude that other, unknown differences between the two experiments have played a role, such as environmental and demographical differences (besides gender and age which we checked for). Performance of OA With ADL Problems in the Baseline Condition as Compared to (Relatively Fit) OA Measured in the Original Experiment (Hypothesis 2) We hypothesized that the OA in the current experiment (with ADL problems) would be more influenced by the background motion. In contrast to what we hypothesized, the study shows that the OA in the current study showed three different patterns of illusion effect: a (large) illusion effect ("over integration" pattern), a reverse illusion effect ("dragged by the background" pattern) and no illusion effect ("minimal use of visual information" pattern). The "dragged by the background" pattern was not present in the (relatively fit) OA of the original experiment (only one participant in the baseline condition). These effects may be caused by the fact that OA in the current study were less fit and therefore may have had more difficulties to properly perform the task, i.e., the effects are caused by differences in the samples of OA. In addition, task difficulty may have modulated the effects of age for specific samples. Therefore, both factors (increased task difficulty and OA group differences) are elaborated upon below. Increased Task Difficulty for OA With ADL Issues Compared to (Relatively Fit) OA Generally, performance of both YA and OA suggest that the task is more difficult in the current setup [see section Replication of the Original Experiment on a Tablet in YA (Hypothesis 1) that explain the increased task difficulty]. With advancing age, sensorimotor performance is altered. 
OA have, for example, difficulties in coordination, an increased variability of movement, and slower movements (Contreras-Vidal et al., 1998;Seidler et al., 2010;Lee et al., 2013). As a consequence, with age, OA are less and less able to perform accurate movements. The smaller target of the current experiment (two centimeters instead of six centimeters) required participants to be more accurate when intercepting the target as compared to the original one. These changes might have increased the difficulty of the task for OA with ADL issues compared to OA without ADL issues. As described by Hubel and Wiesel (1962), perception of objects and motion is driven by feed forward connections between the lateral geniculate nucleus (LGN), layer IV simple cells and layer II/III complex cells leading to orientation and direction-selective receptive fields in the primary visual cortex neurons (V1) (Hubel and Wiesel, 1962;Alitto and Dan, 2010). Gamma-aminobutyric acid (GABA) mediated inhibition is necessary for neuronal responsiveness and selectivity in V1 in order to suppress neuronal responses for irrelevant stimuli (Alitto and Dan, 2010). These excitation/inhibition processes have been shown to be less effective in OA (Leventhal et al., 2003;Francois et al., 2011;McDougall et al., 2018). This may underlie our original finding (de Dieuleveult et al., 2018) that OA are less able to ignore the (irrelevant) background motion in the interception task. McDougall et al. recently discovered that the receptive fields of V1 neurons in OA could expand due to changes in GABAergic functioning leading to a summation enhancement of the neurons responses (McDougall et al., 2018). As a consequence, signals from different small moving stimuli (the target and the squares of the background in the interception task) are pooled together, leading to erroneous percept and additional motion noise (McDougall et al., 2018). The target and the squares of the background being smaller in the current experiment as compared to the original one, might have increased this summation enhancement issue and increased the difficulty of OA to dissociate these two stimuli. Age-related decline in motion sensitivity is also known to be particularly strong for central vision as compared to peripheral vision (Wojciechowski et al., 1995;Owsley, 2011). The target in the current experiment was smaller than the one in the original experiment (2 • of visual angle instead of 6 • ). OA had to rely more on their (compromised) central vision to distinguish the target from the background in the present study, thus increasing their tendency to tap in the center of the screen as a more probabilistic strategy, reducing the illusion effect and decreasing their percentage of hits. OA Group Differences Differences between the two groups of OA might add up to the increased task difficulty as a cause of differences in the task performance. Compared to the original experiment, the OA in this experiment were selected such as to also include individuals who would score below ceiling level on the conventional ADL-related tests. They were also on average older than the participants of the original experiment (mean age: 75.9 ± 4.65 years compared to 67 ± 6.40 years). 
Thus, the degradations described earlier (motion discrimination problems), as well as other age related decline such as increased variability of movements, slowing movements, coordination problems and balance problems (Seidler et al., 2010;Lee et al., 2013), may have become more noticeable and lead to different results between the current compared to the original experiment. The combined results of the original experiment and the current one, lead us to propose a transitional model of the aging process happening in our task (model depicted in Figure 7). In this model, YA are depicted first. This group of participants puts a high weight on the target and small or no weight on the background, resulting in a small illusion effect caused by the moving background. In other words: YA are able to largely (but not completely) ignore the task irrelevant background motion. The original experiment showed an increased illusion effect for OA as compared to YA. The group of OA that participated in that experiment did not have any problem to perform the ADL. Similar results were found for some of the OA in the current experiment. In our previous article, we hypothesized that the strong illusion effect was caused by OA having more trouble to ignore irrelevant sensory information in general (Teasdale et al., 1991;Berard et al., 2012;Eikema et al., 2014;McGovern et al., 2014;de Dieuleveult et al., 2017). With age, proprioceptive and vestibular information become less reliable and may force OA to rely more on vision, they put more weight on the visual cues, leading to an over integration of the background motion and larger direction error (Brenner and van den Berg, 1994;de Dieuleveult et al., 2018). Therefore, this early stage of the aging-related degradation process might be the cause of a first transition in performance patterns (transition 1 in Figure 7) with participants over-integrating the background motion as compared to YA. This is in line with previously found results of the literature that showed that OA tend to integrate all the information present in the environment (Teasdale et al., 1991;Berard et al., 2012;Eikema et al., 2014;McGovern et al., 2014;de Dieuleveult et al., 2017). The OA in the current experiment had on average more problems to perform the ADL as compared to OA of the original experiment. They had lower scores in the pretests as well, were older, hit less targets and were slower. Some of the OA in the current experiment had similar results as the ones that participated in the original experiment ("over integration" pattern). However, some had a reverse illusion effect (tapped more to the left for a left background motion and more to the right for a right background motion, "dragged by the background" pattern) and some did not show any effect of the illusion ("minimal use of visual information" pattern). In the second pattern described in this experiment ("dragged by the background" pattern), the OA showed a reverse illusion effect as compared to the "over integration" pattern. This effect was not observed in the original experiment. This could reveal a second pattern of the aging process happening in our task, the age-related degradations in the sensory systems might become larger and hinder participants from downregulating the taskirrelevant, but relatively salient, background. Participants put more and more weight on the background to the extent that the background outweighs the target. 
They then start to be dragged by the background motion, which they cannot ignore, and tap following this background instead of following the target's direction of motion. These results are supported by the fact that the OA with the "dragged by the background" pattern had more no tap trials than the ones with the "over integration" pattern, showing that the task was more difficult for them. For the last pattern ("minimal use of visual information" pattern), the illusion effect found in the original experiment was not present, since these participants tapped toward the center of the screen instead of following the target or background direction of motion. These results suggest that when the task becomes too difficult, a third transition in patterns, and therewith performance, occurs in OA (transition 3 in Figure 7). Participants no longer try to intercept the targets. The interception task, which required sensory integration, became a simple reaction time task, with participants tapping in the center of the screen as a more probabilistic strategy. These results are supported by the fact that the participants with the "minimal use of visual information" pattern had shorter times to tap than those with the "over integration" pattern, showing that they tended to tap as fast as possible. This is in line with findings in the literature showing that OA present a slowing of the integrative systems and difficulties integrating sensory information in the central nervous system (Stelmach et al., 1989; Teasdale et al., 1991; Furman et al., 2003; Diederich et al., 2008; Hugenschmidt et al., 2009; Stephen et al., 2010; Elliott et al., 2011; Mahoney et al., 2011; Dobreva et al., 2012; Mahoney et al., 2012; Mozolic et al., 2012; Wu et al., 2012; DeLoss et al., 2013; Guerreiro et al., 2014, 2015). This transition to a different (easier) pattern, driven by task difficulty, might be a good indicator of ADL performance.

Additionally, the OA of the current experiment were older than those of the original experiment, and thus might have been more affected by age-related movement deficits. Indeed, with age, OA experience a decline in sensorimotor control and functioning, which can be attributed to changes in the central and peripheral nervous systems and the neuromuscular system (Seidler et al., 2010). These changes lead to motor performance deficits often associated with diminished ADL performance (Seidler et al., 2010). These deficits might be an additional cause of difficulties in reaching the screen accurately for these participants as compared to the participants of the original experiment and might add to the causes of this change in pattern.

We did not find any differences in performance on the ADL-related pretests between the three patterns of illusion. This is not in accordance with the expected decrease in performance across the patterns. These results might be explained by the small number of participants with each of the three patterns. Additionally, as opposed to the interception task, the ADL-related pretests do not focus on vision and upper limb movements, which can explain why we found no relation between the results of the tasks.

Influence of Dual Tasks on Interception Task Performance for YA and OA With ADL Problems (Hypothesis 3)

We hypothesized that OA with ADL issues would be more influenced by the additional tasks as compared to YA (H3).
FIGURE 7 | Graph representing the three hypothesized age-related transitions in patterns happening in the interception task. In the early phase of age-related changes (transition 1), there is a tendency to integrate all available information, with a lack of proper weighting of less reliable information ("over integration" pattern). In a second phase of the age-related process, OA become unable to downregulate the task-irrelevant background and are dragged by it, showing a reverse effect of the illusion (transition 2) ("dragged by the background" pattern). In a later phase of age-related changes, multisensory integration becomes too difficult, and older adults change their pattern again (transition 3), basically turning the interception task into a reaction time task ("minimal use of visual information" pattern).

The results of the experiment did not confirm this hypothesis. Indeed, we presented two dual tasks in this experiment, one that perturbed mainly the proprioceptive and vestibular inputs (balance condition) and one that perturbed mostly cognition (counting condition). The results of Figures 4, 5 showed that these additional tasks had a negative impact on the performance of both YA and OA but did not show a larger effect of the dual tasks in OA compared to YA. This is in accordance with previously reported results showing that dual tasks decrease task performance (Redfern et al., 2001, 2009; Mahboobin et al., 2007; Bisson et al., 2014; de Dieuleveult et al., 2017). Standing on the foam in the balance condition influenced the illusion effect in YA and OA. Indeed, this additional task increased the number of participants who tapped toward the center of the screen without following the actual direction of the target motion in both age groups as compared to the baseline condition. However, the percentages of hits, numbers of no tap trials and times to tap were not different between the balance and baseline conditions for either age group. The counting condition seemed to be more challenging for both age groups. In this condition, none of the OA showed an effect of the background motion except one participant, who showed a reverse illusion effect. None of these participants had an "over integration" illusion effect. Additionally, both age groups had a larger number of no tap trials in the counting condition as compared to the baseline condition, and the OA intercepted fewer targets in the counting condition than in the baseline condition.

Influence of Dual Tasks on Interception Task Performance for (Relatively Fit) OA Measured in the Original Experiment and OA With ADL Problems (Hypothesis 4)

We hypothesized that OA with ADL issues would show differences in illusion effect between dual and single task conditions, as opposed to OA without ADL problems (H4). These differences could be associated with a decline in compensatory mechanisms in OA with ADL issues that are still effective in OA without ADL problems. The results of the experiment confirmed this hypothesis. Indeed, OA with ADL problems measured in the current experiment showed differences between the experimental conditions. Both additional tasks increased the number of OA with the "minimal use of visual information" pattern, while only the counting condition did so in the original experiment. Additionally, the group of OA in the current experiment comprised a large proportion of OA with the "dragged by the background" pattern in both the baseline and balance conditions, while only one participant showed this pattern in the baseline of the original experiment.
The changes observed in OA with ADL problems in the presence of secondary tasks might reveal proprioceptive/vestibular and cognitive compensatory mechanisms that normally help to reduce deficits caused by being unable to ignore the background motion in our task. With age, these compensatory mechanisms might no longer be sufficient. This could explain the degradation in performance in the interception task observed in OA. Aging participants rely more and more on the visual system, are less and less able to ignore the task-irrelevant background, and tend to be dragged by it when tapping (reverse effect of the illusion). This decline in compensatory mechanisms might then be responsible for transition 2 and could be a factor in poorer performance in the ADLs.

LIMITATIONS

The experiment described here is a first step toward the development of a diagnostic tool. However, and especially because different response patterns in different participants were found, further experiments with a substantially larger number of participants are necessary to validate the tool, to link participants' characteristics (i.e., age and ADL score) to their pattern on the interception task, and to explore the modulating effect of task difficulty. For example, varying task difficulty may help to reveal individual transition points (transitions 1, 2 and 3 in Figure 7) that in turn might correlate with ADL score. Such an approach would benefit from a wide range in age and ADL issues. This way, it would be possible to identify more clearly the different key transitional points in terms of age and ADL scores, to better understand the causes of these changes, and to choose the best interventions in order to train ADL performance.

The fact that some OA tend to tap toward the center of the screen instead of following the actual direction of the target motion could also be described as non-adherence to the task. This could reflect a genuine inability to perform the task or a lack of motivation. A lack of motivation can be caused by OA not trusting their ability to perform the task, or by annoyance with a task that they found too difficult. Tapping toward the center of the screen lessens the physical and cognitive effort needed for the task and reduces the total time of each block of trials because participants tapped the screen as fast as possible. This is in accordance with the shorter times to tap found for the "minimal use of visual information" pattern as compared to the two other groups of OA. We observed large discrepancies between the participants. Most of the OA seemed to be really motivated and appeared to do their best in the task; however, a small number of the participants seemed annoyed by the task and wanted it to be done as soon as possible. This motivational problem is not specific to our task. Indeed, even in pencil and paper tests, motivation can be an issue and can depend on the day on which the test is performed for the same participant. In order to further investigate this, developing and testing an easier version of the task would be necessary.

CONCLUSION

We transferred the previously tested interception task from a large screen to a portable tablet version. Although there were differences between both setups, the tablet version showed similar effects of the background illusion for comparable samples (i.e., the YA) and was able to reveal differences between age groups (three different patterns).
This warrants further investigation of the tablet as a potential tool in clinical practice, for instance by linking task performance (patterns) on the interception task to ADL scores on an individual basis. If the transitional model is valid and specific patterns correlate well with ADL scores, it would have an impact on clinical practice. In order to train individuals to help them live independently for a longer time, clinicians would need to identify where their patients are in our transitional model. Pattern one of the aging process reveals problems in multisensory integration (more specifically in appropriately weighting sensory inputs). Clinicians would then want to decrease these problems using specific multisensory integration training programs, such as training programs to ignore irrelevant information in a multisensory environment. Pattern two of the aging process reveals multisensory integration issues as well as weaknesses in sensory compensatory mechanisms. Clinicians would then want to decrease these problems using specific multisensory integration training programs, as in the first pattern of the aging process, and specific proprioceptive/vestibular and cognitive training programs to restore the compensatory mechanisms. Pattern three of the aging process reveals target detection issues and motor problems. Clinicians would then want to target these specific problems using simple detection tasks and upper limb physical training programs. The interception task would be a valuable tool to distinguish the three patterns in clinical practice.

ETHICS STATEMENT

The experimental protocol was reviewed and approved by the TNO Internal Review Board on experiments with human participants (Ethical Application Ref: TNO-2017-015) and was in accordance with the Helsinki Declaration of 1975, as revised in 2013 (World Medical Association, 2013), and the ethical guidelines of the American Psychological Association. Participants received monetary compensation for their participation and their travel expenses were reimbursed. All participants signed an informed consent form prior to the experiment.
Development of Contextual-Based Interactive Multimedia of Tema Daerah Tempat Tinggalku of 4 Grade Students in Public Elementary School (SDN) 054919 Kacangan, Secanggang District, Langkat Regency

This study aims to develop contextual-based interactive multimedia of Tema Daerah Tempat Tinggalku. This research is conducted at SDN 054919 Kacangan, Secanggang District, Langkat Regency. The population in this study consists of 30 grade 4 students. The instruments used in this study are social studies learning outcomes tests and student response questionnaires. The data are collected and analyzed with quantitative descriptive analysis techniques. The method of this research is the Borg & Gall development method combined with the Dick & Carey learning development model. The trial subjects consist of learning material experts, learning design experts, two instructional media design experts, three students for individual trials, six students for small group trials and thirty students for large group trials. The results of the study show that: (1) the results of the trial of subject matter experts in the assessment of the content and feasibility of presentation are in very good criteria (88.09%); (2) the results of the learning design expert's test are in very good criteria (85.42%); (3) the results of the expert design test of learning media I are in sufficient criteria (73.21%), while the expert design test results of learning media II are in sufficient criteria (83.92%); (4) the results of individual trials are in very good criteria (81.67%); (5) the results of the small group trial are in very good criteria (90.42%); (6) the results of the large group trial are in the criteria of good (73.2%).

I. Introduction

Primary Schools / Islamic Primary Schools (SD / MI) provide early primary education before junior high school / Madrasah Tsanawiyah (Middle School / MTs). Education in elementary schools or Islamic elementary schools is focused on forming the personality and mentality of students (Prastowo, 2013: 14). Learning is a modification or strengthening of behavior through experience; this experience can be obtained from the interaction between individuals and their environment (Wirnaningsih & Mardhatillah, 2016). Considering the importance of basic education in SD / MI, the government always strives to improve the quality and relevance of basic education through curriculum development, teacher professional improvement, development of the quality and excellence of basic education, and development of teaching facilities and materials. To follow up on the relevance of education, the government is intensifying its curriculum reform and the procurement of relevant textbooks used in schools. This is because books are something that cannot be separated from the learning cycle. Without books, learning will be hampered. The more supporting books there are, the more interesting learning will be. This also applies to elementary school children, who are still in the stage of concrete development and therefore need interesting and contextual learning media, both in appearance and in content. Therefore, learning media should be as attractive as possible, especially the teaching materials used. Teaching materials are materials or subjects that are systematically arranged and used by teachers and students in the learning process. For this reason, teaching materials in the form of learning multimedia should adopt contextual learning.
Trianto (2010: 104) explains that contextual learning can help students associate the material they learn with the real situation of students and encourage students to make connections between the knowledge they have and their application in their lives as family members and communities. Furthermore Sagala (2013: 87) revealed that children's learning would be more meaningful if the child experienced what he learned, not knowing it. Target-oriented learning mastery of material is proven to be successful in competency in the short term, but fails to equip children to solve problems in the long run. The use of interactive multimedia in learning is able to make an abstract concept such as dynamic fluid become real with static visualization and dynamic visualization (animation), as well as the application of concepts contained in everyday life will feel easier because it can be visualized. The use of interactive multimedia is able to make a concept more attractive so that it is motivated to learn more and master it, besides being able to strengthen the response and control the speed of learning of students because interactive multimedia is interactive and independent (Mardhatillah & Trisdania, 2018). Nature of Learning Learning is the most important thing that must be done by humans to deal with environmental changes that are constantly changing at any time. Syah (2011: 68) argues that generally learning can be understood as a stage of changes in individual behavior that is relatively settled as a result of experience and interaction with the environment. cognitive. Learning is a process of change from behavior as a result of interaction with the environment in meeting their needs with characteristics: (1) changes occur consciously; (2) changes in learning occur are continuous and functional; (3) changes in learning are positive and active, meaning that the changes are constantly increasing and are aimed at getting something better than before; (4) changes in learning are not temporary, but are permanent; (5) changes in learning aim and direction; (6) changes in learning cover all aspects of behavior (Suryabrata, 2011: 233). The Nature of Social Studies Learning Social studies learning in class IV is learning that is adapted to the stages of cognitive development. Susanto (2013: 137) that the breadth of social studies covers a variety of social, economic, psychological, cultural, historical and political life, all of which are studied in this social sciences. Social Knowledge is one of the main subjects at the level of basic education. The existence of students with different social status and conditions will certainly face different problems in the course of their lives. Therefore, social studies learning is very important because the material obtained by students in schools can be developed into something more meaningful when students are in the community, both now and in the future. Social studies subjects in grade IV SD / MI consist of 3 x 35 minutes per week which includes four competencies, namely (1) spiritual attitude competence, (2) social attitudes, (3) knowledge, and (4) skills. This competence is achieved through the intracurricular, curricular, and / or extracurricular learning process. 
Social studies learning emphasizes the aspect of "education" rather than the transfer of concepts because in social studies learning students are expected to gain an understanding of a number of concepts and develop and train their attitudes, values, morals and skills based on the concepts they already have. IPS also discussed the relationship between humans and their environment. Community environment where students grow and develop as part of the community and are faced with various problems in the surrounding environment. Social studies learning outcomes are optimal results of students who include spiritual attitude competencies, social attitudes, knowledge and skills acquired by students after studying social studies by finding various information needed in the form of changes in behavior, knowledge and skills so that students are able to achieve maximum learning outcomes while solving problems related to social problems and applying them to people's lives. The Nature of Contextual Learning Berns and Erickson (2001: 2) explain "Contextual teaching and learning is a conception of teaching and learning that helps teachers relate subject matter content to real world situations; and motivates students to make connections between knowledge and its applications to their lives as family members, citizens, and workers; and engage in hard work that learning requires ". So, contextual learning is a learning approach that makes students more active in learning activities and helps students to be able to connect the knowledge gained in the classroom to the context in accordance with real life. The concept of learning that helps teachers to associate teaching material with the real world situation of the students, which can encourage students to make a connection between knowledge learned and its application in the lives of students as family members and communities (Sardiman, 2014: 222). Furthermore Johnson (2007: 14) states Contextual Teaching and Learning is a learning system that is based on the philosophy that students are able to absorb lessons when they capture meaning in the academic material they receive, and they grasp the meaning in school assignments if they can associate new information with the knowledge and experience they already had before. Contextual Teaching and learning is a learning concept that helps teachers connect the material they teach with the real world situation of students and encourages students to make connections between their knowledge and their application in their daily lives, involving seven main components of learning effective, namely: constructivism (Constructivism), asking (Questioning), finding (Inquiry), learning community (Learning Community), modeling (Modeling), reflection (Reflection), and actual assessment (Kadir, 2013: 25) . Musfiqon (2012: 28) reveals that more intact learning media can be used as an intermediary between teachers and students in understanding learning materials to be more effective and efficient. Asra (2009: 5) suggests that the word media in "learning media" literally means intermediary or introduction, while the word learning is defined as a condition created to make someone do something to learn. Learning media emphasizes the position of the media as a vehicle for channeling messages or learning information to condition a person to learn. Learning Media Learning media can represent what the teacher is less able to say through words. The abstractness of the material can be concrete in the presence of learning media. 
Arsyad (2011: 115) suggests that the use of teaching media in the teaching and learning process can generate new desires and interests, generate motivation and stimulate learning activities and even bring psychological influences to students. The use of teaching media at the teaching orientation stage will greatly help the effectiveness of the learning process and the delivery of lesson content at that time. In addition to arousing students' motivation and interests, teaching media can also help students improve understanding, present data with interest, facilitate interpretation of data and compact information. Learning media have different characteristics from one another. Hernawan (2007: 22-34) explains the characteristics of learning media according to their types, namely. a) Visual media is media that can only be seen. b) Audio media is media that can only be heard. c) Audio visual media is a combination of audio visual or commonly called hearing media. Interactive Multimedia According to Benny A. Prianto (2009: 212), multimedia is a program that is able to display elements of images, text, sound, animation, and video in a display that is controlled through a computer program. Daryanto (2010: 51) explains that multimedia is divided into two categories, namely linear multimedia and interactive multimedia. Daryanto (2010: 51), interactive multimedia is a multimedia that is equipped with a controller that can be operated by users, so users can choose what desired for the next process. Daryanto (2010: 52) explains that the selection of learning media with appropriate interactive multimedia will provide great benefits for teachers and students. In general, the benefits that can be obtained is that the learning process is more interesting, more interactive, the amount of teaching time can be reduced, the quality of student learning can be improved, and the learning process can be done anywhere and anytime, and student learning attitudes can be improved. III. Research Methods This type of research was Research and Development with the design of learning development models by Dick and Carey. Steps include: 1) Conducting preliminary research; 2) Making software designs; 3) Collection of materials; 4) develop contextual-based interactive multimedia of Tema Daerah Tempat Tinggalku; 5) Product reviews and trials; and 6) Test the effectiveness of the product. This study was conducted at SDN 054919 Kacangan, Secanggang District, Langkat Regency, which is located at Kota Kota Lama Dusun VI Kacangan Karang Gading Village. The timing of the research was conducted in the even semester of the school year 2018/2019. The subjects in this study were 4 th grade students of SDN 054919 as many as 30 students as a large group test and 6 students for a small group test. Data collection instruments in this development were in the form of assessment instruments to assess products that had been developed. In addition, questionnaires were also given to students, this questionnaire was used to obtain data on the attractiveness and accuracy of the material given to students which included aspects of media display and media content. In addition, data collection in this study is a test of learning outcomes, tests are used to assess students' abilities after using contextually based interactive multimedia. Before the test is used, first the test is tested for validity, reliability, level of difficulty, and the power of different questions. 
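The item screening described above (validity, reliability, level of difficulty, and discriminating power) can be illustrated with a short script. The study does not state which formulas were applied, so the sketch below is a hypothetical example using common choices — the proportion-correct difficulty index, an upper–lower 27% discrimination index, and KR-20 reliability — on a made-up students-by-items score matrix.

```python
import numpy as np

def item_analysis(scores: np.ndarray):
    """Basic item analysis for a students x items matrix of 0/1 scores.

    Returns per-item difficulty (proportion correct), an upper-lower
    discrimination index, and the KR-20 reliability of the whole test.
    """
    n_students, n_items = scores.shape
    totals = scores.sum(axis=1)

    # Difficulty index: proportion of students answering each item correctly.
    difficulty = scores.mean(axis=0)

    # Discrimination: difference in item difficulty between the upper and
    # lower 27% groups ranked by total score.
    k = max(1, int(round(0.27 * n_students)))
    order = np.argsort(totals)
    lower, upper = scores[order[:k]], scores[order[-k:]]
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)

    # KR-20 reliability coefficient for dichotomously scored items.
    p = difficulty
    q = 1.0 - p
    var_total = totals.var(ddof=1)
    kr20 = (n_items / (n_items - 1)) * (1.0 - (p * q).sum() / var_total)

    return difficulty, discrimination, kr20

# Random 0/1 data standing in for a 30-student, 20-item tryout.
rng = np.random.default_rng(0)
demo = (rng.random((30, 20)) < 0.7).astype(int)
diff, disc, rel = item_analysis(demo)
print(diff.round(2), disc.round(2), round(rel, 2))
```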
Learning Material Expert Validation The validation of the learning material experts on the development of contextually-based interactive multimedia on tema daerah tempat tinggal ku in 4 th grade students was conducted to obtain information that would be used to improve and improve the quality of media. The results of the validation were scores on aspects of social studies learning material that included eligibility, feasibility of presentation, comments and suggestions for improvement and conclusions From the results of the expert assessment of the learning material as a whole stated that the level of achievement of scores on the feasibility of content and the feasibility of presentation was 88.09 in the category of "Excellent". The results of the assessment of the material of economic activity and its relationship with various fields of work in the environment around the place of residence of students developed received several comments, including: (a) KD material is presented, (b) material illustrations are visually displayed video, (c) external cases the area is not optimal, (d) the glossary in the material does not yet exist, (e) the involvement of students is still in the form of problem training, and the advice is to improve it according to the results of the discussion. The conclusions from the assessment, comments and suggestions by experts in learning materials that interactive multimedia based contextually deserves to be tested in the field with revisions. Validation of Learning Design Experts Validation of learning design experts on the development of develop contextual-based interactive multimedia of Tema Daerah Tempat Tinggalku in 4 th grade of elementary school is getting information that will be used to improve and improve media quality from aspects of attractiveness, physical appearance, appropriateness of design, conformity of format, target characteristics, clarity media instructions, clarity of material exposure, and appropriateness of evaluation with material. The conclusion from the results of the assessment by the learning design experts as a whole can be concluded that the level of achievement of the score is 85.42 with the category "Excellent". The results of the assessment of learning design in contextual-based interactive multimedia development received several comments including: (a) the media created must be in accordance with the strategy / method / learning model, (b) each meeting must be displayed KI, KD, indicators and learning objectives, (c ) learning design includes the initial activities, the core, and the closing, (d) at the end of the interactive multimedia glossary must be made, and the advice is multimedia revisions according to the comments. Conclusions from assessment, comments and suggestions by learning design experts that contextually based interactive multimedia are worthy of being tested in the field with revisions. Validation of Learning Media Design Experts Based on the results of the assessment by instructional media design experts covering aspects of media display design, media programming design, and overall media design content it can be concluded that the achievement score of media design experts 1 is 73.21 where the range is at the 65-74 score level. categorized as "Fair", and the level of achievement of scores from media design experts 2 is 83.93 where the range is at the level of achieving a score of 75-84 categorized as "Good". 
The results of the assessment by media design expert 1 of the instructional media design in the contextual-based interactive multimedia development received several comments, including: (a) the design is good, and the menu display is in accordance with the level of the students who use it; (b) what was used is less interesting; (c) use real and appropriate images; and (d) the media layout is less attractive, so make it according to student needs. The suggestion is that all data from the media expert review be used as a basis for revision in order to perfect the content of the learning media before it is tested on students as users of the development product. There were also a number of further comments, including: (a) add guidelines for use, and (b) place summaries on each sub-theme. The conclusion from the assessment, comments and suggestions by the media design experts is that the contextual-based interactive multimedia is worthy of being field tested with revisions.

Individual Trial

Individual trials were conducted in grade 4A of SDN 054919 Kacangan, Secanggang District, Langkat Regency, with 3 students: 1 student with high achievement, 1 student with medium achievement, and 1 student with low achievement. From the results of the individual trials, it can be concluded that the overall assessment and response to the development of the contextual-based interactive multimedia was 81.67%. Based on the assessment criteria, the interactive learning media is in the "Excellent" category.

Trial of Small Groups

Small group trials were also conducted in grade 4A of SDN 054919 Kacangan, Secanggang District, Langkat Regency, with 6 students: 2 students with high achievement, 2 students with moderate achievement, and 2 students with low achievement. From the small group trials, it can be concluded that the overall assessment and response to the development of the contextual-based interactive multimedia was 90.42%. Thus, the small group trial responses were predominantly very good regarding the quality of the contextual-based interactive multimedia.

a. Calculating Individual Learning Completeness

Based on the completeness criteria for individual learning outcomes, compiled from the students' abilities, the percentages are classified into completeness criteria. From the individual learning completeness data, it can be seen that out of 6 students, 2 students were "incomplete" and 4 students had "completed".

b. Calculating Classical Learning Completeness

Students' classical learning completeness can be calculated as the number of students who reached completeness divided by the total number of students, multiplied by 100%: (4/6) × 100% = 67%. Based on the data above, 67% of students achieved KB ≥ 70%. After students' completeness in the individual and classical learning process was analyzed, the results of the pre-test and post-test were used to calculate the gain score. The gain score obtained was 0.50, so the gain score in the small-scale test of 6 students is classified as moderate. For the large-scale test, based on the classical learning completeness data, 90% of students achieved KB ≥ 70%.
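The completeness percentages and gain categories reported above can be reproduced with simple arithmetic. The sketch below assumes the KB threshold of 70% quoted in the text and the standard Hake normalized-gain formula g = (post − pre)/(100 − pre), which is consistent with the moderate and high labels attached to the reported values; the paper does not spell out its exact gain formula, and the individual scores used here are illustrative placeholders.

```python
def classical_completeness(individual_scores, threshold=70.0):
    """Percentage of students whose individual completeness (KB) meets the threshold."""
    passed = sum(1 for s in individual_scores if s >= threshold)
    return 100.0 * passed / len(individual_scores)

def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: share of the possible improvement actually achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

def gain_category(g):
    # Common interpretation bands: >= 0.7 high, 0.3-0.7 moderate, < 0.3 low.
    return "high" if g >= 0.7 else "moderate" if g >= 0.3 else "low"

# Small-group trial: 4 of 6 students complete -> about 67% classical completeness.
print(classical_completeness([75, 80, 72, 71, 60, 65]))  # 66.7 (illustrative scores)

# Illustrative class-average pre/post percentages giving gains near those reported.
print(gain_category(normalized_gain(50.0, 75.0)))  # 0.50 -> moderate
print(gain_category(normalized_gain(50.0, 87.0)))  # 0.74 -> high
```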
After students' individual and classical learning completeness was analyzed, the results of the pre-test and post-test were used to calculate the gain score. The normalized gain score formula is used to see the increase in scores and the effectiveness of the developed multimedia between before and after its use. Based on the gain score, the result obtained is 0.74, so the gain score in the large-scale test is classified as high.

Discussion

To determine the feasibility of the contextual-based interactive multimedia, in this case built with Adobe Flash, a validity test was carried out by material experts, design experts, and media experts. Each expert gives an assessment of each indicator contained in the validation sheet; the learning media validation is a quantitative descriptive assessment questionnaire expressed in score distributions and rating scale categories. Based on the material expert validation, the validation assessment was 84.09% with valid criteria, but improvements were still suggested by the material experts. The material experts suggested using simple words that are understood by students. After revision, the percentage increased to 92.11% validity with valid criteria. The validators also recommended using materials that are in accordance with the area of residence, according to the theme. After discussion with the material experts, the contextual-based interactive multimedia, in this case in the form of Adobe Flash, was revised based on the input and suggestions from the validators.

Based on the validation by the learning design experts, covering aspects of content, presentation, language and appearance, a score of 78.57% was obtained, rated as good. The validator suggested that the colors used in the multimedia be more varied and that the text size be slightly enlarged so that all students can see clearly. After revision, it was considered appropriate for use by students. Furthermore, the feasibility test of the media with individual students obtained a percentage of 81.67%, and in the small-scale trial of 6 students the result was 90.42%, in the very good category and very feasible to use. Based on the assessments given by the validators and by the students of the contextual-based interactive multimedia developed with the input provided by the experts, the interactive multimedia based on context, in this case Adobe Flash, is declared valid and feasible for use in learning.

V. Conclusion

Based on the development of contextual-based interactive multimedia of Tema Daerah Tempat Tinggalku at SDN 054919 Kacangan, Secanggang District, Langkat Regency, and the discussion of the results presented earlier, some conclusions can be drawn: 1) the feasibility of content and feasibility of presentation of the media product, based on the results of the subject matter expert trial, is 88.09%; 2) the test results of the learning design experts were in excellent criteria (85.42%); 3) the results of the expert design test of learning media I were in sufficient criteria (73.21%), while the results of the expert design test for learning media II were in sufficient criteria (83.92%); 4) the results of the individual trials are in excellent criteria (81.67%); 5) the results of the small group trials were in very good criteria (90.42%); 6) the results of the large group trial are in the criteria of good (73.2%).
In addition, based on the results of the study, it could also be concluded that also contextual-based interactive multimedia development on Tema Daerah Tempat Tinggalku can improve student learning outcomes.
Using flushing rate to investigate spring-neap and spatial variations of gravitational circulation and tidal exchanges in an estuary

D. C. Shaha, Y.-K. Cho, G.-H. Seo, C.-S. Kim, and K.-T. Jung
Department of Oceanography, Chonnam National University, Gwangju 500-757, Korea
Korea Ocean Research and Development Institute, Ansan 426-744, Korea
Received: 6 February 2010 – Accepted: 18 February 2010 – Published: 2 March 2010
Correspondence to: Y.-K. Cho ([email protected])

Introduction

The net flow of water into and out of estuaries influences the exchanges with the surrounding coastal region, thus playing a role in controlling the estuarine environment by regulating the transport of nutrients, sediments, organisms and pollutants. Therefore, it is essential to understand estuarine hydrodynamic processes, such as tidal exchange and gravitational circulation exchange or some combination of the two, which transport water and its constituents (Monsen et al., 2002; Ribeiro et al., 2004). The tidal exchange is more or less independent of the river discharge, whereas the gravitational circulation exchange is strongly dependent on the freshwater input in maintaining the longitudinal density gradient that controls an exchange flow, with seaward flow at the surface and landward flow at depth (Officer and Kester, 1991; Dyer, 1997). The competition between tidally induced vertical mixing and river-generated buoyancy is the main factor determining the differences in water exchange.

Spring-neap variations in tidal shear stress may result in spring-neap variations in tidally driven mixing, stratification and gravitational circulation (Jay and Smith, 1990; Uncles and Stephens, 1996; Monismith et al., 1996; Ribeiro et al., 2004; Savenije, 2005). Tide-driven shear mechanisms dominate in narrow, shallow estuaries, and strong tidal currents tend to suppress stratification through vigorous mixing, which inhibits gravitational circulation (West and Broyd, 1981; Uncles et al., 1985; Geyer, 1993; Savenije, 2005). However, the weaker turbulence during a neap tide could lead to an acceleration of gravitational circulation (Nunes Vaz et al., 1989; Monismith et al., 1996). Such an influence of low mixing at neap tides was noted by Jay and Smith (1990) in the extent of salt intrusion into the Columbia River Estuary.

Gravitational circulation is the most efficient mechanism for flushing river-borne pollutants out to the open sea (Nunes Vaz et al., 1989). In general, gravitational circulation increases in strength with increasing river flow (Nguyen et al., 2008), which may cause a depletion of estuarine resident populations by advection from their resident position, particularly larval forms of bottom feeders and deep-water organisms, but also copepods and larval fish (Hough and Naylor, 1991; Jassby et al., 1995; Morgan et al., 1997; Kimmerer et al., 1998, 2002; Monismith et al., 2002). Recently, spatiotemporal variations in the abundance of the demersal copepod Pseudodiaptomus sp. in the Sumjin River Estuary (SRE) have been reported (Park et al., 2005).
However, there has been no information on the physical aspects of the tidal exchange and gravitational circulation exchange for the SRE. Officer and Kester (1991) used the flushing rate to understand estuarine hydrodynamics and transfer processes related to the tidal exchange and gravitational circulation exchange. They calculated a single flushing rate for the entire estuarine system. However, Monsen et al. (2002) reported that no single flushing rate for a system is valid for all time periods, locations and constituents, and that it does not describe all transport processes. In addition, a single flushing rate provides no information about the connections between transport and the spatial heterogeneity of non-conservative quantities, such as specific conductivity, temperature and chlorophyll-a. As a single flushing rate does not represent the spatial variation of all transport processes, the flushing rate method was applied to multiple segments of the SRE to examine the spatial variability of the flushing rate and to estimate gravitational circulation and tidal exchanges.

The main focuses of this study are i) to understand the spatial variation of gravitational circulation exchange and tidal exchange between multiple segments of the SRE and the adjacent bay during spring and neap tides on the basis of the flushing rate; and ii) to suggest the flushing rate as a useful parameter to investigate the relative contribution of the tidal exchange and gravitational circulation exchange in transporting salt up-estuary.

The rest of this paper is organized as follows. The data sources are briefly presented in Sect. 2. The methodology is described in Sect. 3. The results are presented in Sect. 4. A discussion follows in Sect. 5, with the conclusions in Sect. 6.

Study area and data

The SRE is one of the few natural estuaries on the south coast of Korea. The watershed area, including farmland, is 4897 km². The Sumjin River discharges into Gwangyang bay. The bay is connected in the south to the coastal ocean (South Sea) and in the east to Jinju bay via the narrow Noryang Channel (Fig. 1). The climate of Korea is characterized by four distinct seasons: spring (March, April, May), summer (June, July, August), autumn (September, October, November) and winter (December, January, February). The mean precipitation is 1418 mm a⁻¹, based on annual data from 1968 to 2001. Seasonal precipitation and runoff in the Sumjin River basin have decreased in spring and winter, but increased in summer (Bae et al., 2008). The tidal cycle is semi-diurnal, with mean spring and neap ranges of 3.33 and 1.02 m, respectively.

The river discharge data used in this study were from the Songjung gauge station, located about 11 km upstream from CTD (conductivity-temperature-depth) station 24 and operated by the Ministry of Construction and Transportation. The monthly median river discharge was highest (370 m³ s⁻¹) in July 2006 and lowest (11 m³ s⁻¹). A Global Positioning System was used to obtain the locations of the CTD stations (Fig. 1).
The nominal distance between the CTD stations was 1 km. The measurements were started from the river mouth one and a half hours before high water, and took about one and a half hours to complete. On the basis of the stratification parameter, which is the difference in salinity between the surface and the bottom divided by the depth-averaged salinity, well-mixed conditions were found near the mouth of the SRE, with partially mixed conditions in the central and inner regimes during spring tide. In contrast, strongly stratified conditions were found along the SRE during neap tide (Shaha and Cho, 2009).

Methods

The flushing rate (F) is defined as the rate at which the index quantity is exchanged between multiple estuarine segments and the adjacent sea (Officer and Kester, 1991; Dyer, 1997). Officer and Kester (1991) calculated a single flushing rate near the mouth of Narragansett Bay using the average bay salinity to understand hydrodynamics and transport processes. A single flushing rate for the whole system does not describe the spatial variation of tidal exchange and gravitational circulation exchange. To calculate the flushing rate between multiple estuarine segments and the adjacent bay, the salt balance equation at steady state can be used as follows (Bowden and Gilligan, 1971; Savenije, 1993; Smith and Hollibaugh, 2006; Nguyen et al., 2008):

R S + A D ∂S/∂x = 0,    (1)

where S is the salinity, A is the tidal average cross-sectional area, R represents the river discharge associated with the residual flow, and x is the distance measured landward from the mouth at x = 0. D is the dispersion coefficient, for which

D/D_0 = (S/S_0)^K,

where the subscript 0 refers to the situation at x = 0 and K is a dimensionless coefficient which is larger than zero (Savenije, 1993). The dispersion coefficient D has a large value near the mouth of estuaries, decreasing gradually in the upstream direction to become zero at the toe of the salt intrusion curve. This relation adequately describes a combination of density-driven (also called gravitational circulation) and tide-driven dispersion (Savenije, 1993; Nguyen et al., 2008).

The flushing rate F is then given for a single-layer system with multiple segments:

F_i = R S_mi / (S_0 − S_i).    (2)

S_mi is the salinity of the residual flow assumed at the boundary between the segment of interest and the adjacent source segment (usually the oceanic end member of the estuary). Smith and Hollibaugh (2006) suggested that this salinity can be estimated as the average of those two segments, S_mi = (S_0 + S_i)/2, so that

F_i = R (S_0 + S_i) / [2 (S_0 − S_i)].    (3)

S_0 is the salinity of the adjacent bay (oceanic end member of the SRE) and S_i is the salinity of the segment of interest. The term (S_0 − S_i) is the box (segment) model equivalent of the horizontal salinity difference (Smith and Hollibaugh, 2006). If there is a measurable salinity difference between the segment of interest and the adjacent segment(s), Eq. (3) can be solved for the exchange flow between many segments and the adjacent bay (Smith and Hollibaugh, 2006). When the salinity difference between the segments becomes small and indistinguishable from 0, this equation cannot be solved for the exchange flow because a negative F has no physical meaning.
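As a minimal numerical illustration of the single-layer formulation, the sketch below evaluates the flushing rate of Eq. (3) at a set of segment boundaries from a river discharge and segment-averaged salinities. The discharge and salinity values are placeholders, not observed SRE data.

```python
import numpy as np

def single_layer_flushing_rate(river_discharge, bay_salinity, segment_salinity):
    """Single-layer flushing rate F_i = R * S_mi / (S_0 - S_i), Eq. (3).

    river_discharge  : R, residual (river) flow in m^3 s^-1
    bay_salinity     : S_0, salinity of the adjacent bay (oceanic end member)
    segment_salinity : array of S_i, mean salinity of each segment
    Returns np.nan where S_0 - S_i vanishes (the equation is not solvable there).
    """
    s_i = np.asarray(segment_salinity, dtype=float)
    s_mi = 0.5 * (bay_salinity + s_i)          # salinity of the residual outflow
    d_s = bay_salinity - s_i                   # horizontal salinity difference
    with np.errstate(divide="ignore", invalid="ignore"):
        f = river_discharge * s_mi / d_s
    return np.where(np.isclose(d_s, 0.0), np.nan, f)

# Placeholder example: R = 30 m^3 s^-1, bay salinity 32 psu, twelve 2-km segments.
seg_salinity = np.linspace(30.0, 2.0, 12)
print(single_layer_flushing_rate(30.0, 32.0, seg_salinity).round(1))
```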
To apply a steady state model in an estuary, it is necessary to examine how quickly the system adjusts to a new situation. If the time required for the system to reach a new equilibrium is too long, a steady state model cannot be used (Savenije, 2005). The residence time of the SRE varied from 3.6 days for a river discharge of 77 m³ s⁻¹ to 13.5 days for a river discharge of 10 m³ s⁻¹, with averages of 7.3 and 7.5 days during spring and neap tide, respectively. In view of the short residence time, the SRE can be considered to be near steady state. Salinity versus river discharge relationships can also be used to validate the steady state approximation (Regnier et al., 1998). In that case, the solution of salinity versus river discharge should be unique at any position along a steady state estuarine system. Figure 2 shows the yearly evolution of salinity observed at CTD station 8 for various river discharges in the SRE. Approximately the same salinity was observed for the same river flow, which also implies the steady state condition of the SRE.

The quantity F (L³ T⁻¹) represents the combined effects of the tide-driven exchange and the gravitational circulation exchange, which equals the total longitudinal flux in the Hansen-Rattray estuarine parameter ν (Hansen and Rattray, 1965, 1966). F is used to quantify gravitational circulation exchange and tidal exchange. A plot of F versus R illustrates which exchange process is responsible for transporting salt into the system: (1) tidal exchange should be independent of the river discharge, but the gravitational circulation exchange should depend strongly on it (Officer and Kester, 1991; Dyer, 1997). If the tidal exchange is dominant over the gravitational circulation, then the flushing rate (F) should be about constant for all river discharges (Dyer, 1997). This indicates that the total salt transport is caused entirely by tidal exchange and gravitational convection ceases, i.e., mixing is dominant over advection. (2) If there were no tidal exchange, a plot of F against R would give a curve with an intercept at zero and increasing values of F for increasing values of R (Dyer, 1997). In that case, the tide-driven dispersive flux of salt is unimportant and up-estuary salt transport is entirely dominated by gravitational exchange. Shaha and Cho (2009) found that gravitational circulation was much more effective than tide-driven dispersion in distributing an isohaline of 1 psu at the upstream end of the SRE. (3) If both tidal exchange and gravitational circulation are active, the plot of the flushing rate against the river discharge should have an intercept value (F_int) at zero river flow, which represents the tidal exchange, and the amounts in excess of the intercept value for various river discharge conditions represent the gravitational circulation exchange (G_c) (Officer and Kester, 1991; Dyer, 1997). In such a condition, the salt transport has contributions from both tide-driven and density-driven processes. Thus, the downstream transport of salt by the river discharge is balanced by the sum of the gravitational circulation flux (G_c) and the tide-driven dispersive flux (F_int).

The tide-driven exchanges are dominant over the gravitational circulation exchanges if F_int > G_c, and the gravitational circulation exchanges over the tide-driven exchanges if F_int < G_c, for a specific river discharge. The tide-driven dispersive flux of salt exceeds the gravitational flux during a period of decreasing freshwater input (Pilson, 1985; Fram et al., 2007; Nguyen et al., 2008).
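The F-versus-R diagnostic described above can be applied by fitting a straight line to flushing rates obtained under different river discharges: the intercept at R = 0 estimates the tide-driven exchange F_int, and the excess above the intercept for a given discharge estimates the gravitational contribution G_c. The sketch below does this with a first-order polynomial fit on made-up (R, F) pairs; it assumes an approximately linear dependence, which real data need not obey.

```python
import numpy as np

def split_tidal_and_gravitational(discharge, flushing_rate):
    """Fit F = F_int + b * R and return the tidal intercept F_int and the
    gravitational excess G_c = F - F_int for every observation."""
    r = np.asarray(discharge, dtype=float)
    f = np.asarray(flushing_rate, dtype=float)
    slope, f_int = np.polyfit(r, f, 1)   # degree-1 fit: highest power first
    g_c = f - f_int                      # excess over the zero-discharge intercept
    return f_int, g_c

# Made-up flushing rates at one boundary for several river discharges (m^3 s^-1).
r_obs = np.array([10.0, 25.0, 40.0, 77.0])
f_obs = np.array([120.0, 180.0, 250.0, 400.0])
f_int, g_c = split_tidal_and_gravitational(r_obs, f_obs)
print(round(f_int, 1), g_c.round(1))
```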
The estuarine parameter ν is the proportion of the tide-driven fraction (F_int) to the total up-estuary salt flux (F = F_int + G_c) in an estuary,

ν = F_int / (F_int + G_c),   (4)

as given by Officer and Kester (1991) and Dyer (1997). If ν approaches 1, the upstream transport of salt is entirely dominated by tide-driven processes. If ν is close to 0, up-estuary salt transport is almost entirely by gravitational circulation. If 0.1 < ν < 0.9, the system experiences a contribution of both gravitational circulation and tide-driven circulation to the upstream salt transport (Hansen and Rattray, 1966; Valle-Levinson, 2010). Equation (3) implies that the system is well mixed. However, the SRE shows relatively strong stratification during neap tide and well-mixed conditions with weak vertical salinity gradients during spring tide (Shaha and Cho, 2009). Therefore, the flushing rate has been calculated using Eq. (3) for spring tide, treating the system as a single layer with multiple segments. In contrast, the flushing rate equation has been modified for two-layer circulation during neap tide with multiple segments. For two-layer circulation, the analytical protocol of Gordon et al. (1996) is followed to calculate the flushing rate.

A steady state in which volume is conserved has volume fluxes

Q_si = Q_bi + R.   (5)

The oceanic water enters the bottom layer, flows upward (Q_bi) into the surface layer and out (Q_si = F_i) to the ocean. The volume of the outflow is associated with the freshwater input. If the salt flux through the mouth is dominated by the exchange flux, then the net salt balance is

Q_bi S_bi = Q_si S_si,   (6)

where S_bi is the bottom salinity and S_si is the surface salinity of each segment. The inflow from the oceanic end member to the segment of interest through boundary i (i = 1...12) then reads

Q_bi = R S_si / (S_bi − S_si).   (7)

The flushing rate is then given for a two-layer system with multiple segments as

F_i = Q_si = R S_bi / (S_bi − S_si).   (8)

Equation (8) can be solved for the exchange flow between many segments and the adjacent bay only if there is a salinity difference between the bottom salinity (S_bi) and the surface salinity (S_si). As the SRE is stratified during neap tide, the salinity of the upper stratified layer was averaged to obtain S_si. In this study, the bottom salinity is considered to be S_b0 (= S_b1 = ... = S_b10). Landward of 18 km the depth-averaged salinity is used to calculate the flushing rate, treating the system as one layer owing to the absence of two-layer circulation.

To obtain a more accurate estimation of the flushing rate at the boundary between segments, the SRE was divided into twelve segments, each 2 km in length. The segmentation provides a box-model approximation of the longitudinal salinity gradients. The flushing rate was calculated between multiple estuarine segments and the adjacent bay in response to the daily average freshwater input. Two CTD stations were allocated to each segment to obtain the average salinity (S_i) of that segment. The salinity of the seawater entering the SRE via Gwangyang bay varied with time. CTD station 1 was treated as the seaward boundary, providing the reference salinity of the seawater entering in the bottom layer (S_b0) and the surface salinity of the seawater (S_si) near the mouth.
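For the stratified (neap-tide) case, the two-layer relations above can be evaluated directly, as in the short sketch below. It implements the Knudsen-type balances written in Eqs. (5)-(8); the function and variable names, the example salinities and the small-difference cutoff are illustrative choices rather than values from the paper.

```python
import numpy as np

def two_layer_flushing_rate(R, S_b, S_s, min_dS=0.1):
    """Two-layer (Knudsen-type) flushing rate per segment boundary.

    Volume balance:  Q_s = Q_b + R
    Salt balance:    Q_b * S_b = Q_s * S_s
    =>  Q_b = R * S_s / (S_b - S_s)   and   F = Q_s = R * S_b / (S_b - S_s)

    R   : river discharge (m^3 s^-1)
    S_b : bottom-layer salinity at the boundary
    S_s : averaged surface-layer salinity of the segment
    Boundaries where S_b - S_s is below `min_dS` are returned as NaN, because
    the exchange flow is then not resolvable (cf. Eq. 8).  `min_dS` is an
    arbitrary illustrative threshold, not a value from the paper.
    """
    S_b, S_s = np.asarray(S_b, float), np.asarray(S_s, float)
    dS = S_b - S_s
    Q_b = np.where(dS > min_dS, R * S_s / dS, np.nan)   # deep inflow
    F = Q_b + R                                          # surface outflow = flushing rate
    return Q_b, F

# Illustrative neap-tide example
Q_b, F = two_layer_flushing_rate(R=11.0, S_b=[32.0] * 4, S_s=[28.0, 24.0, 18.0, 10.0])
print(F)
```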
Comparing the two distributions with each other, it appeared that the buoyancy force associated with the freshwater input was the main differentiating factor. Although the river discharge was 77% larger in 2006 than in July 2005, the increased river discharge did not hinder the inflow of the 27 psu salt wedge from Gwangyang bay to the observation point but increased the vertical salinity gradient by 50%. The difference in salinity between two subsequent high tides was approximately 3 psu. As the high water height differences between two subsequent high tides were 0.71 m in 2005 and 0.67 m in 2006, this difference might cause that variation.

Longitudinal variation of salinity

Tide and river discharge are the two main forcing factors controlling estuarine circulation (deCastro et al., 2004; Savenije, 2005; Ji, 2008). Changes in the tidal currents with the spring-neap tidal cycle can result in fortnightly modulation of stratification and gravitational circulation (Simpson et al., 1990; Ribeiro et al., 2004). Longitudinal transects of salinity were carried out at high water during both spring and neap tides in each season from August 2004 to April 2007. The vertical sections showed a relatively well mixed to partially mixed structure during spring tide (Figs. 4 and 5). The most saline water of 33 psu appeared at the lower portion of the estuary over the entire observation periods during spring 2005 (Fig. 5b). In addition, an isolated structure of saline water appeared about 9 km upstream from the mouth (Fig. 5d, g and h). This phenomenon may have occurred due to mixing between seawater and freshwater inflow from the tributary (Hwangchon), as the density structure is largely controlled by the freshwater input in the coastal water and the density structure followed the salinity distribution. The vertical sections of salinity during spring and neap tides in 2006 and 2007 are not shown here owing to their similar structures. Strong stratification appeared during neap tide in all seasons (Figs. 4 and 5). As the tidal amplitude decreased during neap tide, the weakened effect of the bottom friction strengthened the gravitational circulation, causing strong stratification. Strong stratification and gravitational circulation during neap tide were noted by Monismith et al. (1996) in Northern San Francisco bay. On the basis of the stratification parameter, the SRE transitions from partially or well-mixed during spring tide to stratified during neap tide (Shaha and Cho, 2009). In general, the strength of stratification was comparatively greater in summer, owing to the high river discharge (77 m³ s⁻¹), than in other seasons (Fig. 5g).

Spring-neap and spatial variations of gravitational circulation exchange and tide-driven exchange

To investigate the spatial variations in gravitational circulation exchange and tide-driven exchange during spring and neap tides, the flushing rate was calculated at twelve boundaries starting from the oceanic end member to the upstream end by dividing the SRE into twelve segments. The schematic diagram (Fig.
6) illustrates the calculation of flushing rate for well and partially mixed condition during spring tide (upper) and for stratified condition during neap tide (lower) with low and high river discharge (R) conditions.At low river discharge condition during spring tide (neap tide), the flushing rate increased by a factor of 100 ( 9) from the upstream end to the mouth of the SRE.At high river discharge during spring tide (neap), the flushing rate increased by a factor of 23 ( 5) from the upstream end to the mouth of the SRE. Figure 7 shows the spatial variations of the mean flushing rate during spring and neap tides.The mean flushing rate was extremely heterogeneous, ranging from 13 to 842 m 3 s −1 .The standard deviation became very large at the oceanic end member as the denominator of Eqs. ( 3) and ( 8) approached to about zero.The small fluctuation of this denominator yielded very sparse flushing rate near the mouth which caused large scale standard deviation.The flushing rate was approximately four times greater near the mouth during spring tide due to the larger tidal amplitude than during neap tide.The flushing rate varied significantly between spring and neap tides landward of 7 km from the mouth of the SRE.This length is consistent with the observed median tidal excursion length of 6.8 km (Shaha and Cho, 2009).The higher the flushing rate, the more efficient the downstream of the SRE is to flush out.Landward of 8 km the flushing rates were approximately the same during spring and neap tides due to decreasing tidal effects in the central and inner regimes (Table 1).Volume average salinity (S) for the SRE, ocean salinity (S 0 ), and river discharges (R) for various periods of the year along with calculated flushing rate (F ) during spring and neap tides are given in Table 2. Figure 8 is a plot of flushing rate for the average salinity of the SRE, which is a single flushing rate, versus the river discharge during spring and neap tides.This represented a combined effect of the tide-driven exchange and gravitational circulation exchange in transporting salt upstream.The intercept value (F int ) provides the tidal exchange whereas the amounts in excess of the intercept value (here expressed by G c ) for various river discharge conditions give the gravitational circulation exchange.Tide-driven dispersive flux of salt exceeded gravitational circulation flux at river discharge of <20 m 3 s −1 .This combined effect depicts only the general exchange characteristics of the SRE without differentiating spatial variations.Such combined effect was noted by Officer and Kester (1991) for the entire Narragansett bay. To understand the spatial variations in the tide-driven exchange and gravitational circulation exchange along the SRE during spring tide, the flushing rate for each estuarine segment was plotted against the river discharge (Fig. 9).The tidal exchange was dominant over gravitational circulation exchange near the mouth (SEG1-3) during spring tide where the flushing rate was about constant for all river discharges.The flushing rate data were relatively sparse at the oceanic end member as the small fluctuation of the denominator of Eq. 
(3) yielded widely scattered flushing rate values. Although the standard deviation was large, the fitting provides a roughly constant value of F, indicating a tide-driven dispersive flux of salt, which was supported by the calculation of the potential energy anomaly (see the discussion) and the estuarine parameter ν (= 1.04). Tide-driven dispersive flux dominated over gravitational circulation exchange in transporting salt landward of 7 km from the mouth of the SRE during spring tide due to the larger tidal amplitude of the spring cycle. McCarthy (1993), Savenije (1993, 2005) and Nguyen et al. (2008) reported that the gravitational circulation is weak near the mouth of an exponentially varying estuary, and tide-driven exchange is dominant there. On the basis of the stratification parameter, well-mixed conditions were also found near the mouth of the SRE during spring tide (Shaha and Cho, 2009), where the large tidal amplitude enhanced the turbulent mixing. Moreover, the tidal dominance appeared in the observations of the variations in the diurnal salinity taken at CTD station 7 with low (28 m³ s⁻¹) and high (120 m³ s⁻¹) river discharges during spring tide, as shown in Fig. 3. In both river discharge cases, the same salinity of 27 psu appeared at CTD station 7 during high tide (Fig. 3a and b). This indicates that the tide was dominant in pushing the salinity of 27 psu from Gwangyang bay to the observation point, where the freshwater input was negligible. The combined contribution of tidal exchange and gravitational circulation exchange was found in segments 4 to 10 during spring tide (Fig. 9), where the SRE shows partially stratified conditions on the basis of the stratification parameter (Shaha and Cho, 2009) and also as a function of the potential energy anomaly. Gravitational circulation exchange and tide-driven dispersive flux differed with the rate of change of salt content for various river discharges. Tide-driven dispersive flux of salt dominated under low river discharge (10 m³ s⁻¹) conditions in these segments. Fram et al. (2007) found that the tide-driven dispersive flux of salt exceeds gravitational circulation during a period of decreasing river discharge. However, gravitational circulation exchange exceeded the tide-driven dispersive flux at a river discharge of 50 m³ s⁻¹ in the upper segments (SEG6-11). As the M2 tidal amplitude decreased by 12% within the estuary relative to the mouth (Table 1), the consequent reduction in vertical mixing and the increasing river discharge enhanced gravitational circulation by increasing the potential energy of the water column. Due to a decrease in tidal amplitude, the gravitational circulation has intensified in Ariake bay (Yanagi and Abe, 2005). As a consequence, the combined contribution of tidal exchange and gravitational circulation exchange was important in transporting salt landward of 8 km. In contrast, gravitational flux was much more effective in transporting salt to segments 11 and 12 during spring tide, which is governed by the river flow. Savenije (1993) and Nguyen et al. (2008) also found that gravitational circulation exchange is dominant in the upstream part.
and the tide (neap). Strongly stratified conditions were found during neap tide along the SRE as a function of the stratification parameter (Shaha and Cho, 2009) and the potential energy anomaly in the water column. Pulsing of stratification and gravitational circulation is easily understood in terms of the significant reduction in vertical mixing during neap tide due to the weakened effect of the bottom friction (Monismith et al., 1996). Gravitational circulation exchange was entirely dominant in transporting salt to segment 11 during neap tide, which is governed by the river flow.

The estuarine parameter ν (Hansen-Rattray parameter), as determined from Eq. (4), clearly illustrates which exchange process is responsible for transporting salt up-estuary. On the basis of the estuarine parameter ν (Fig. 11), the tide-driven dispersion was almost constant landward of 6 km from the mouth of the SRE during spring tide, where ν > 0.9. This length is consistent with the observed median tidal excursion length of 6.8 km (Shaha and Cho, 2009), where gravitational circulation ceased and the upstream transport of salt was entirely dominated by tide-driven dispersion (Fig. 9, SEG1-3). From 7 to 19 km, both gravitational circulation and tide-driven dispersive fluxes contributed to transporting salt during spring tide (Fig. 9, SEG4-11), where 0.1 < ν < 0.9. Both gravitational circulation and tide-driven dispersive fluxes were also important in transporting salt from 1 to 19 km during neap tide, where 0.1 < ν < 0.9. Landward of 20 km, gravitational circulation exchange was much more effective than tide-driven dispersion during spring and neap tides, where ν < 0.3. MacCready (2004) found ν = 0 at the upstream end and ν ∼ 0.8 near the mouth of the Hudson River Estuary. The results of Savenije (1993), MacCready (2004) and Nguyen et al. (2008) are consistent with this calculation.

Exchange processes based on potential energy anomaly

The horizontal density gradient has important implications in generating tidally varying stratification and gravitational circulation (Savenije, 2005). Strong stratification events are periods of intense gravitational circulation (Monismith, 1996), which are linked through straining of the salinity field. Tidal straining is the result of velocity shear acting on a horizontal density gradient, creating oscillations in the stratification of the water column (Murphy et al., 2009). To determine the influence of tidal straining on water column stratification, which is strongly linked with gravitational circulation, the potential energy anomaly φ of the water column was calculated for each CTD cast. Following the approaches of Simpson et al. (1990) and de Boer et al. (2008), the potential energy anomaly is the amount of work necessary to completely mix the water column (J m⁻³), which can be calculated from

φ = (1/H) ∫_{−H}^{0} (ρ̄ − ρ) g z dz,

with the depth-averaged density

ρ̄ = (1/H) ∫_{−H}^{0} ρ dz,

where ρ is the vertical density profile over the water column of depth H, z is the vertical coordinate and g is the gravitational acceleration (9.8 m s⁻²). For a given density profile, φ (J m⁻³) represents the amount of work required to completely mix the water column.

Variations in the tidal exchange and gravitational circulation exchange naturally arise through neap-spring variations in the presence of the potential energy anomaly along the SRE.
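For a discrete CTD profile, φ can be evaluated by direct numerical integration of the two expressions above, as in the sketch below. The implementation choices (trapezoidal integration, a two-layer toy profile, the density values) are illustrative assumptions; only the definition of φ follows the Simpson et al. (1990) formulation quoted in the text.

```python
import numpy as np

def potential_energy_anomaly(z, rho, g=9.8):
    """Potential energy anomaly (J m^-3) of a single CTD cast.

    Implements phi = (1/H) * integral_{-H}^{0} (rho_mean - rho(z)) * g * z dz,
    with rho_mean the depth-averaged density.
    z   : depth coordinate in metres, negative downward (0 at surface, -H at bottom)
    rho : density profile (kg m^-3) at the same levels
    """
    z = np.asarray(z, float)
    rho = np.asarray(rho, float)
    order = np.argsort(z)                    # integrate from -H up to 0
    z, rho = z[order], rho[order]
    H = -z[0]
    rho_mean = np.trapz(rho, z) / H
    phi = np.trapz((rho_mean - rho) * g * z, z) / H
    return phi

# Illustrative two-layer profile: lighter (fresher) water over denser water
z = np.linspace(-10.0, 0.0, 101)
rho = np.where(z > -4.0, 1014.0, 1020.0)
print(potential_energy_anomaly(z, rho))      # positive for a stratified column
```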
Figure 12 depicts the spatial variation in the potential energy anomaly φ during spring and neap tides along the SRE.Each contour of the spring and neap tides was the average for twelve samples obtained from August 2004 to April 2007.The strength of tide-driven mixing and stratification varied significantly between the spring and neap tides, up to approximately 20 km landward, as a function of φ. The amount of φ was less than 10 Jm −3 during spring tide near the mouth (landward of 7 km) of the SRE due to tidally driven turbulent mixing (Fig. 12).Burchard and Hofmeister (2008) have examined the dynamics of the potential energy anomaly at a location, where the water column is fully destabilized during flood, with a range of φ between 0 and 9 Jm −3 .Therefore, it can be assumed that tide-driven exchange may be dominant, when φ is <10 Jm −3 .In addition, the tidal exchange zone extends roughly a tidal excursion from the mouth (Signell and Butman, 1992).The median tidal excursion observed in the SRE was 6.8 km (Shaha and Cho, 2009), that indicating the tidal exchange zone.Signell and Butman (1992) also found rapid exchange in the tidal mixing zone.As per the stratification parameter, well-mixed conditions were found near the mouth (landward about 7 km) of the SRE during spring tide (Shaha and Cho, 2009), that also indicating the tidal exchange zone.Thus, the tidal exchange zone, as determined from the flushing rate near the mouth during spring tide, is consistent with the function of φ, stratification parameter and tidal excursion. In contrast, the amount of φ increased to 10∼24 Jm −3 during spring tide landward of 7 km from the mouth.The water column contained more φ in the central and inner regimes of the SRE relative to the mouth due to the reduction in mixing as the M 2 tidal amplitude decreased by 12% within the estuary (Table 1).As a consequence, both gravitational circulation exchange and tide-driven exchange were important in transporting salt in the central and inner regimes of the SRE (landward of 7 km) under various river discharge conditions. Increased values of φ in the water column (27 < φ < 67 Jm −3 ) represents stronger stratification during neap tide than during spring tide.The weaker turbulence during a neap tide could lead to the reduction in mixing and the consequent increase of φ in the water column enhanced gravitational circulation.Both gravitational circulation exchange and tidedriven exchange were contributed in transporting salt into the SRE.Gravitational circulation exchange is not necessarily proportional to the potential energy anomaly in the water column.The dynamics of the potential energy anomaly has examined by Burchard and Hofmeister (2008) on a strongly stratified water column, where gravitational circulation exchange exists over the whole tidal period, with a range of φ between 45 and 60 Jm −3 .The link between stratification and gravitational circulation develops during neap tide due to the significant reduction in vertical mixing, which is the result of the weakened bottom friction (Nunes Vaz et al., 1989;Monismith et al., 1996). There was no substantial amount of φ required to completely mix the water column landward of 20 km, where gravitational circulation exchange was entirely dominant in transporting salt during spring and neap tides (Fig. 
11).This also indicates that the tidal effects decreased at landward of 20 km from the mouth, and the combined effects between tidal exchange and gravitational circulation exchange started from this location.Thus, the tidal exchange and gravitational circulation exchange, calculated using the flushing rate, showed consistency with the function of potential energy anomaly. Effects of exchange processes on spatial heterogeneity of chlorophyll-a The flushing rate for the entire SRE could not provide information about the connections between the transport and spatial heterogeneity of non-conservative quantities such as chlorophyll-a.Conversely, the calculation of the flushing rate for multiple segments of the SRE provided strong clues about the importance of transport processes in shaping the spatial patterns of chlorophyll-a.Park et al. (2005) conducted a study on spatiotemporal fluctuations in the abundance of the demersal copepod, Pseudodiaptomus sp., in the SRE during spring tide.The chlorophyll-a concentrations measured during their research period are shown in Table 3.The concentrations were lowest at upstream end where salt transport almost entirely by gravitational circulation (Shaha and Cho, 2009).As gravitational circulation exchange may increase in strength with increasing river flow, advective transport of estuarine resident populations will increase (Monismith et al., 2002).Owing to this advective loss, the chlorophyll-a concentration was lowest at upstream end segment.However, the chlorophyll-a concentration was highest at the distance of 20 km due to the advection by the river flow from upstream end.This region is the landward limit of haline stratification as a function of φ where there is no net landward and seaward movement of water.As a result, Pseudodiaptomus sp.maintained their positions in this region. Conversely, lower concentration of chlorophyll-a was also found near the mouth of the SRE during spring tide.Strong tide-driven exchange was found near the mouth during spring tide (Figs. 7,9,11 and 12), which caused advection of planktonic organisms from the mouth of the SRE to Gwangyang bay.As a result, the concentration was lower near the mouth. Conclusions Single flushing rate of the SRE did not show the spatial variation of exchange characteristics.To investigate spatial and spring-neap variations of gravitational circulation and tidedriven exchanges the flushing rate was calculated between multiple segments of the SRE and the adjacent bay.The tides caused rapid exchange in the vicinity of the mouth during spring tide compared to neap tide.The stratification and water column stability was found to vary in different sections of the SRE as a function of potential energy anomaly which modulated gravitational circulation and tide-driven exchanges. 
The tide-driven dispersion was dominant at low river discharge condition along the SRE during spring and neap tides.Tide-driven dispersive flux of salt was also dominant near the mouth (landward of 7 km) during spring tide for all river discharge conditions observed due to the larger tidal amplitude where gravitational circulation ceased.However, both gravitational circulation and tidal exchanges were contributed in transporting salt in the central and inner regime during spring tide due to the reduction in tidal amplitude that caused partially stratified conditions.The combined contributions of two fluxes were also appeared during neap tide along the SRE.Gravitational circulation exchange entirely dominated for salt flux at the upstream end during spring and neap tides. These results furnished strong clues about the spatially varying abundance of planktonic organisms in the SRE.Results suggested the use of spatially varying flushing rate to estimate tide-driven and gravitational circulation exchanges, and also to understand the distributions of living biomass and suspended particles in an estuary. Fig. 1 . Fig. 1.Map of the study area.The solid circles indicate the CTD stations.The star mark denotes the locations of the Gwangyang (GT1 and GT2) and Hadong (HT) tidal stations. Fig. 2 . Fig. 2. Salinities observed at 8 km far from the mouth of the Sumjin River Estuary as a function of river discharge. Fig. 4 . Fig. 4. Salinity distributions for all longitudinal depth surveys of the Sumjin River Estuary at high water during spring (a, b) and neap (c, d) tides during summer and autumn of 2004.The black solid circles indicate the CTD stations. Fig. 5 . Fig. 5. Salinity distributions for all longitudinal depth surveys of the Sumjin River Estuary at high water during spring (a, b, c, d) and neap (e, f, g, h) tides in each season of 2005.The black solid circles indicate the CTD stations. Fig. 6 . Fig.6.Schematic diagram illustrating the calculation of flushing rate at the boundary between two adjacent segments for well and partially mixed condition during spring tide (upper) and for stratified condition during neap tide (lower) with low (10 and 11 m 3 s −1 ) and high (50 and 44 m 3 s −1 ) river discharges (R).In two layer system during neap tide, the outflow (F i ) from the surface layer to the ocean through boundary is the sum of deep flow volume (Q bi ) plus river discharge. Fig. 8 . Fig. 8. Plot of the flushing rate (F ) calculated for the average salinity of the Sumjin River Estuary against the river discharge (R) during spring and neap tides.The intercept value, F int , denotes the tide-driven exchange.The amounts in excess of F int for the various river discharges indicate the gravitational circulation exchange (G c ). Fig. 9 . Fig. 9. Plot of the flushing rate (F ) against the river discharge (R) for various segments of the Sumjin River Estuary during spring tide.The intercept value, F int , indicates the tidal exchanges.The amounts in excess of the F int value for various river discharges indicate gravitational circulation exchange (here expressed by G c ). Gravitational circulation exchanges entirely dominate at the upstream end. Fig. 10 . Fig. 10.Plot of the flushing rate (F ) against the river discharge for various segments of the Sumjin River Estuary during neap tide.The intercept value, F int , indicates the tidal exchanges.The amounts in excess of the F int value for various river discharges indicate gravitational circulation exchange (here expressed by G c ). 
Gravitational circulation exchanges entirely dominate at the upstream end.

Fig. 11. Spatial variation of the median estuarine parameter (ν) during spring and neap tides along the Sumjin River Estuary. For ν ∼ 1, up-estuary transport of salt is entirely by tide-driven mixing. For ν ∼ 0, up-estuary salt transport is almost entirely by gravitational circulation. For 0.1 < ν < 0.9, both gravitational circulation and tide-driven circulation contribute to transporting salt up-estuary.

Fig. 12. Spatial variation in the potential energy anomaly during spring and neap tides along the Sumjin River Estuary. Each contour is the average of twelve samples obtained from August 2004 to April 2007.

Table 1. Observed tidal amplitudes for M2, S2, K1 and O1 at the Gwangyang and Hadong tidal stations.

Time-series observations of the salinity distribution were conducted over a tidal cycle during spring tide at station 7 on 21-22 July 2005 and on 15-16 June 2006. The longitudinal transects for salinity and temperature were carried out at high water during both spring and neap tides during each season from August 2004 to April 2007 using a CTD profiler (Ocean Seven 304 of IDRONAUT Company). A Global Positioning

Table 2. Volume average salinity (S) for the Sumjin River Estuary, ocean salinity (S_0), and river discharges (R) for various periods of the year along with the calculated flushing rate (F) during spring and neap tides.
v3-fos-license
2018-04-03T04:13:51.161Z
2004-03-12T00:00:00.000
38211143
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.jbc.org/content/279/11/10814.full.pdf", "pdf_hash": "cf1d56da664e55a95f2a7e1e610be1251721ce51", "pdf_src": "Adhoc", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:24", "s2fieldsofstudy": [ "Biology" ], "sha1": "762d415609c445024b580e3ea2e954d29707ec7c", "year": 2004 }
pes2o/s2orc
Optimum Amyloid Fibril Formation of a Peptide Fragment Suggests the Amyloidogenic Preference of β2-Microglobulin under Physiological Conditions*

β2-Microglobulin (β2m) is a major component of amyloid fibrils deposited in patients with dialysis-related amyloidosis. Although full-length β2m readily forms amyloid fibrils in vitro by seed-dependent extension with a maximum at pH 2.5, fibril formation under physiological conditions as detected in patients has been difficult to reproduce. A 22-residue K3 peptide of β2m, Ser20-Lys41, obtained by digestion with Achromobacter protease I, forms amyloid fibrils without seeding. To obtain further insight into the mechanism of fibril formation, we studied the pH dependence of fibril formation of the K3 peptide and its morphology using a ThT fluorescence assay and electron microscopy, respectively. The K3 peptide formed amyloid fibrils over a wide range of pH values with an optimum around pH 7, in contrast with the pH profile of the seed-dependent extension reaction of full-length β2m. This suggests that once the rigid native fold of β2m is unfolded and additional factors triggering the nucleation process are provided, full-length β2m discloses an intrinsic potential to form amyloid fibrils at neutral pH. Fibril formation was strongly promoted by dimerization of K3 through Cys25. The morphology of the fibrils varied depending on the fibril formation conditions and the presence or absence of a disulfide bond. The various fibrils had the potential to seed fibril formation of full-length β2m accompanied by a characteristic lag phase, suggesting that their internal structures are similar.

Amyloid fibrils are recognized as being associated with the pathology of more than 20 serious human diseases, and the responsible peptides or proteins specific to these diseases have been identified (1-5). These fibrils are characterized by a cross-β structure in which β-strands are oriented perpendicularly to the axis of the polymeric fibril (6-8). Moreover, various proteins and peptides that are not related to diseases can also form amyloid-like fibers, implying that formation of amyloid fibrils is a general property of polypeptides (7, 9-14).
Clarifying the mechanism of amyloid fibril formation is essential not only for understanding the pathogenesis of amyloidosis but also for improving our understanding of the mechanism of protein folding. Dialysis-related amyloidosis is a common and serious complication among patients on long term hemodialysis (15,16), in which ␤ 2 -microglobulin (␤ 2 m) 1 forms amyloid fibrils. Native ␤ 2 m, made of 99 amino acid residues, corresponds to a typical immunoglobulin domain (Fig. 1) and is a component of the type I major histocompatibility antigen (17)(18)(19). Although the increase in ␤ 2 m concentration in blood over a long period is the most critical risk factor causing amyloidosis, the molecular details remain unknown. Recently ␤ 2 m, because of its relatively small size, which makes it suitable for physicochemical studies, has been used as a target for extensive studies addressing the mechanism of amyloid fibril formation in the context of protein conformation (20 -28). We have studied the mechanism of fibril formation and their conformation with fibrils prepared by a seed-dependent extension reaction (29 -36). Intriguingly, the seed-dependent reaction has an optimum at pH 2.5 (37,38), which is far from the physiological pH at which amyloid fibrils are deposited in patients. The minimal sequence provides various pieces of information useful for addressing amyloid fibril formation. It is likely that the minimal sequence includes the initiation site for amyloid fibril formation of the whole molecule. Moreover, it may constitute the core of the consequent amyloid fibrils. We found under conditions without agitation that although efficient fibril formation of whole ␤ 2 m at pH 2.5 requires seeding, K3 peptide can form amyloid fibrils spontaneously, suggesting that the free energy barrier for fibril formation by the K3 peptide is lower than that of whole ␤ 2 m (30). In this paper, to obtain further insight into the mechanism of amyloid fibril formation of ␤ 2 m, we studied the pH dependence and role of the disulfide bond on amyloid fibril formation of the K3 peptide. The results show that an optimum for K3 fibril formation exists at neutral pH, suggesting that, although unfolding of the rigid native-fold and additional factors triggering the nucleation process are necessary, the intrinsic amyloidogenic preference of ␤ 2 m has an optimum at neutral pH. EXPERIMENTAL PROCEDURES Recombinant ␤ 2 m and K3 Peptide-Recombinant human ␤ 2 m was expressed in the methylotrophic yeast, Pichia pastoris, and purified as described (29,30). Three fractions with 6 (Glu-Ala-Glu-Ala-Tyr-Val-), 4 (Glu-Ala-Tyr-Val-), and 1 (Val-) additional amino acid residues added to the N-terminal (Leu) of intact ␤ 2 m were obtained. These additional residues were derived from the signal sequence in the expression vector. The second peak with 4 additional amino acid residues was the major peak, and this fraction was used as the intact ␤ 2 m. K3 peptide was obtained by digestion of ␤ 2 m with lysyl endopeptidase from Achromobacter lyticus (Achromobacter protease I, Wako Pure Chemical, Osaka, Japan) as described previously (30). For preparation of the K3 peptide, we used a crude ␤ 2 m fraction made of a mixture of species with different N-terminals. The K3 peptide has a cysteine residue, Cys 25 ( Fig. 1). Alkylated K3, in which Cys 25 was alkylated by iodoacetamide, was prepared by incubating 100 M K3 with 10 mM iodoacetamide in 50 mM Tris-HCl buffer at pH 9.0 for 1 h. 
A K3 dimer with a disulfide bond between Cys 25 was prepared by air oxidation for 3 days in the same buffer. Both K3 derivatives were separated by reverse phase high performance liquid chromatography (30). Polymerization Assay-Fibril formation of K3 and its derivatives were performed at 37°C using a final peptide concentration of 100 M for the K3 monomers and 50 M for the K3 dimer. The buffers used were glycine-HCl (pH 1.5-3.0), sodium acetate (pH 3.5-5.5), MES-NaOH (pH 6.0 -7.0), and Tris-HCl (pH 7.5-8.5). The concentration of the buffers was 50 mM and all contained 100 mM NaCl. The lyophilized K3 and its derivatives were dissolved in 100% Me 2 SO and then diluted into the buffer solutions. The final concentration of Me 2 SO was less than 1% (v/v). We confirmed that 1% (v/v) Me 2 SO does not affect the standard seed-dependent extension reaction with intact ␤ 2 m. For seeding experiments, the sonicated K3 fibrils at a concentration of 5 g/ml were used as seeds. Amyloid fibril formation of the full-length ␤ 2 m was carried out using a standard seed-dependent extension at 25 M ␤ 2 m at 37°C (37,38). It must be noted that the protein solution was not agitated during the standard extension reaction. The seed fibrils of full-length ␤ 2 m used were the 6th generation from original fibrils purified from patients and were extended with monomeric recombinant ␤ 2 m. ThT Fluorescence Assay-The polymerization reaction was monitored by fluorometric analysis with ThT at 25°C as described previously (37). The excitation and emission wavelengths were 445 and 485 nm, respectively. From each reaction tube, 5 (for time course measurements) or 7.5 l (for pH-dependence measurements) was taken and mixed with 1.5 ml of 5 M ThT in 50 mM glycine-NaOH buffer (pH 8.5) and the fluorescence of ThT was measured using a Hitachi fluorescence spectrophotometer, F4500. We confirmed that the pH of the ThT assay solution is not affected significantly by the addition of the protein solution at acidic pH. Transmission Electron Microscopy-Reaction mixtures (2.5 l) were diluted with 25 l of distilled water. These diluted samples were spread on carbon-coated grids and were allowed to stand for 1 to 2 min before excess solution was removed with filter paper. After drying the residual solution, the grids were negatively stained with 1% phosphotungstic acid (pH 7.0) and once more excess solution on the grids was removed by filter paper and dried. These samples were examined under a Hitachi H-7000 electron microscope with an acceleration voltage of 75 kV. CD-Far UV CD measurements were carried out with a Jasco spectropolarimeter, J-720, at 20°C using a cell with a light path of 1 mm. The protein concentration was 25 M and the results are expressed as the mean residue ellipticity [] (deg cm 2 dmol Ϫ1 ). RESULTS pH Dependence of K3 Fibril Formation-In the present study the fibril formations were examined at various pH conditions, and the amounts of fibril formed were determined from ThT binding using the standard method at pH 8.5 (see "Experimental Procedures"). Although fibrils prepared at different pH conditions might have different ThT binding capabilities, thus producing some uncertainty in estimating the amounts, we consider that the present method is the most reliable available. Purified and lyophilized K3 peptide was difficult to completely dissolve in water under acidic pH conditions, exhibiting turbidity and a high background as monitored by ThT fluorescence. 
Among the various solvent conditions examined, we found that a high concentration of Me 2 SO, which was also useful for dissolving ␤ 2 m amyloid fibrils (31,44), could completely dissolve the K3 peptide. Therefore, we used 100% Me 2 SO to prepare a stock solution of the K3 peptide, which was mostly used within the same day. Under acidic pH conditions, full-length recombinant ␤ 2 m forms amyloid fibrils by the seed-dependent extension reaction with an optimum at pH 2.5 (29,37,38). We observed no fibril formation without seeding even after incubating for several days. However, it has been shown that agitation can induce fibril formation of ␤ 2 m without seeding (45). In accordance with this, we confirmed that agitation of the solution with seeds further accelerated seed-dependent fibril formation. 2 In contrast, the K3 peptide formed amyloid fibrils spontaneously at pH 2.5 (30). The kinetics depended on peptide concentration whereby the higher the peptide concentration the faster the reaction. At 100 M K3, a lag phase of 3 h was observed and the reaction ended after about 6 h incubation ( Fig. 2A). The lag phase shortened with increasing K3 concentration, such that the overall reaction accelerated (data not shown; see Fig. 3B of Ref. 30). Although it is likely that agitation also accelerates fibril formation of the K3 peptide, all the experiments were carried out without agitation, where most of the reactions ended after 24 h (Fig. 2). We examined the pH dependence of the spontaneous fibril formation of the K3 peptide at 100 M as monitored by ThT fluorescence (Fig. 3A). Because most of the reactions ended at 24 h (Fig. 2), the profile was constructed by measuring the fibril formation at 24 h. Intriguingly, the ThT fluorescence intensity after 24 h incubation increased gradually with increasing pH to a maximum at pH 7.5. At pH values above 7.5, a sharp drop in ThT fluorescence was noted. A time course of fibril formation showed a lag phase for all the pH values examined (Fig. 2). The Role of the Disulfide Bond-K3 peptide has a free thiol group at Cys 25 , such that it can form a homodimer with an intermolecular disulfide bond. Disulfide bond formation by air oxidation is promoted by increasing pH. Thus, it is likely that 2 Y. Ohhashi and Y. Goto, unpublished results. disulfide bond formation occurs during incubation of K3 under neutral and alkaline conditions, although in our study we started the reaction with monomeric K3 that was purified by high performance liquid chromatography. In fact, analysis of K3 fibrils prepared in alkaline pH regions by high performance liquid chromatography revealed a significant amount of dimer formation during incubation (i.e. 50% at pH 7.5). A small fraction of dimer was observed even for fibrils prepared in weakly acidic pH conditions (data not shown). To analyze fibril formation of the K3 monomer separated from the dimer, we added 10 mM DTT to the reaction mixture. Between pH 2.0 and 7.0, the profile of the ThT fluorescence intensity after 24 h (Fig. 3B) was similar to that without DTT (Fig. 3A). Consistent with this, both at pH 2.5 and 6.0, the kinetics of fibril formation did not depend largely on the presence or absence of DTT, although a small change of lag time was observed (Fig. 2, A and B). On the other hand, the pH profile of the ThT fluorescence after 24 h showed a considerable difference between pH 7.0 and 8.0 (Fig. 3B). The kinetics of fibril formation at pH 7.5 showed that addition of DTT suppressed fibril formation significantly (Fig. 2C). 
These results indicated that at pH 7.5 the fibrils are formed predominantly by a disulfide bonded dimer. The pH profile of the fibril formation of monomeric K3 peptide in 10 mM DTT revealed a maximum at pH 5.5-6.5. To confirm the role of the disulfide bond, the K3 dimer was prepared by air oxidation, purified by high performance liquid chromatography, and fibril formation was examined (Fig. 3D). As anticipated, ThT fluorescence after 5 h showed a dramatic increase. The pH region showing strong ThT fluorescence extended widely from pH 2 to 8, with an optimum at pH 6.0 -7.5. The maximal ThT value at pH 7.5 was about 2-fold larger than that of the K3 monomer, although we used the same peptide concentrations with respect to the monomer: 100 M for the K3 monomer and 50 M for the K3 dimer. ThT fluorescence was minimum at pH 3, and increased slightly at pH values below 3. The kinetics of fibril formation showed that the lag phase disappeared at pH 6.0 and 7.5 and a shortened lag phase of 30 min was observed at pH 2.5 (Fig. 2). The rapid kinetics at pH 6.0 and 7.5 resemble the standard fibril extension reaction of the full-length ␤ 2 m at 25 M and pH 2.5, which was completed in 2 h (29,30). Thus, formation of the disulfide bond accelerated fibril formation as well as increasing the amount of fibrils monitored by the ThT fluorescence binding at pH 8.5. As a control, we prepared an alkylated K3 peptide in which the thiol group of Cys 25 was alkylated by iodoacetamide. Below pH 5.0, fibril formation of the alkylated K3, as monitored by ThT fluorescence (Fig. 3C), was similar to that of K3 in 10 mM DTT (i.e. K3 with a free thiol) (Fig. 3B). However, no increases in ThT fluorescence above pH 6.5 were observed, suggesting that the acetamide group introduced steric hindrance preventing the formation of tightly packed amyloid fibrils above pH 6.5. The results also suggested the presence of at least two types of fibrils made of monomeric K3 peptide. The first, with thiol groups exposed to the solvent, was mainly formed at pH values below 6.5 and was not affected by alkylation of the thiol groups. The other, with thiol groups buried and formed above pH 6.5, could not be formed once the thiol group was alkylated because of steric hindrance introduced by the alkyl groups. On the other hand, alkylation of Cys 25 prevents its ionization. It is possible that the prevention of ionization also contributes to the reduced amyloidogenicity above pH 6.5, although the details are unknown. Fibril Morphology-Figs. 4 and 5 show electron micrographs of fibrils of K3 and its derivatives prepared at pH 2.5 and 6.5, respectively. At pH 2.5, two types of monomer K3 fibrils were mainly observed. One of the monomer K3 fibrils (Fig. 4F) was similar to fibrils of intact ␤ 2 m prepared at pH 2.5 (Fig. 4E), i.e. twisted thick fibrils with maximal and minimal diameters of 16.5 and 10 nm, respectively, and relatively short in length. The other was clustered thin filaments with a variety of diameters (Fig. 4G). The overall morphology of K3 (Fig. 4A), K3 in the presence of 10 mM DTT (Fig. 4B), and alkylated K3 fibrils (Fig. 4C) appeared similar to each other. Under these conditions, clustered filaments predominated and the fraction of thick filament was low. At pH 6.5, the monomer K3 fibrils observed (Fig. 5, A-C) were again categorized into two types: one with a diameter of ϳ10 (Fig. 5E) and the other with a diameter of about 5.5 nm (Fig. 5F). 
The former had a slightly twisted morphology with a longitudinal periodicity of about 60 nm, whereas the twist was not clear in the latter. The former are likely to consist of two latter filaments judging from their observed diameters. In general, the length of the fibrils prepared at pH 6.5 was longer than fibrils prepared at pH 2.5. The K3 dimer produced thin fibrils both at pH 2.5 (Fig. 4D) and 6.5 (Fig. 5D), whereas the fibril length was shorter (100 -200 nm) than those of the K3 monomer fibrils at both pH values. The kinetics of fibril formation at pH 6.0 and 7.5 (Fig. 2, B and C) was rapid without a lag phase, similar to the standard fibril extension reaction of the full-length ␤ 2 m, suggesting that the K3 dimer had a strong potential for initiating fibril formation. It is likely that the shorter length of the fibrils produced by the K3 dimer is related to its strong potential of fibril formation. Seeding Effects-In a previous paper (30), we showed that the K3 fibrils prepared at pH 2.5 work directly as seeds in the fibril formation of the K3 peptide at the same pH (i.e. the homogeneous extension of K3 with the K3 seeds). Upon addition of preformed K3 fibrils to the K3 peptide at pH 2.5, the ThT fluorescence increased smoothly without a lag phase, in contrast to the fibril formation of the K3 peptide alone with a lag phase of several hours (see Fig. 6 of Ref. 30). Here, we examined if the same is true of the fibrils prepared at neutral pH. K3 seeds were prepared by sonicating the K3 fibrils extended at pH 6, and fibril formation was examined at pH 6.0 in the presence of seeds (Fig. 6A). The lag phase disappeared, indicating the seeding effects of preformed K3 fibrils. The absence of a lag phase suggested a smooth homogeneous extension reaction without a high energy barrier (30). Similar experiments were carried out at pH 7.5, where the reduced K3 cannot form fibrils spontaneously (Fig. 6B). In the presence of 10 mM DTT at pH 7.5, although no fibril formation was detected for 26 h, a gradual increase of ThT fluorescence was observed in the presence of K3 seeds, indicating that seeding enables fibril formation even for the reduced K3 at pH 7.5. As described before, in the absence of DTT and seeds, an increase in ThT fluorescence occurred with a lag phase (open circles in Fig. 6B, see also Fig. 2C) because of the slow oxidation of thiol groups and consequent fibril formation. Under these conditions, we observed acceleration of fibril formation by seeding (solid circles in Fig. 6B). The difference of the reaction curves in the absence and presence of DTT (i.e. line with solid circles, line with solid squares) arises from the contribution to fibril formation of disulfide-bonded K3, which occurs with time at pH 7.5. We also examined cross-reaction between K3 fibrils and fulllength ␤ 2 m by the standard fibrils extension reaction at pH 2.5 (Fig. 6C). Without agitation, full-length ␤ 2 m requires seed fibrils for smooth amyloid fibril formation at pH 2.5, where ␤ 2 m is substantially unfolded (29,33). Although no reaction occurred in the absence of seeds, fibril extension proceeded rapidly in the presence of ␤ 2 m seeds and was completed in 3 h. We previously reported that K3 fibrils prepared at pH 2.5 work as seeds in the extension reaction of ␤ 2 m at 35 M and pH 2.5, although the lag phase of about 10 h still remained (30). 
The remaining lag phase suggested the difficulty of heterogeneous reaction between K3 seed and monomeric ␤ 2 m, in contrast to the homogeneous extension of the K3 seed with monomeric K3. Here, we examined whether or not the K3 fibrils prepared at neutral pH were able to work as seeds for formation of the ␤ 2 m fibrils at pH 2.5. First, K3 seeds prepared at pH 2.5 were added to a solution of monomeric ␤ 2 m at pH 2.5. Fibril formation of ␤ 2 m with a lag phase of 8 h was observed in the presence of K3 seeds (Fig. 6C), which is consistent with previous results (30). The K3 seeds prepared at pH 6.5 also showed the seeding effect, although it was accompanied by a slightly longer lag phase of 10 h (Fig. 6C). K3 dimer seeds prepared at pH 7.5 seeded ␤ 2 m fibril formation to an extent similar to K3 monomer seeds prepared at pH 6.5. These results indicate that fibrils of K3 although they were formed at different pH conditions have similar internal structures, even though they are not the same, which makes seeding of ␤ 2 m fibrils possible. DISCUSSION pH Dependence of Fibril Formation-For many amyloidogenic globular proteins, destabilization of the native globular with intact ␤ 2 m seeds (circles), with K3 seeds prepared at pH 2.5 (triangles), with K3 seeds extended at pH 6.5 (inverted triangles), with dimeric K3 seeds extended at pH 7.5 (diamonds), and without seeds (squares). The concentration of seeds was 5 g/ml. state by either mutations or introduction of unstable conditions is highly correlated with formation of amyloid fibrils, suggesting that denaturation or unfolding of the native state is a critical event in triggering amyloid formation (46 -48). In other words, denaturation of the native state is assumed to be a necessary condition of amyloid fibril formation although it is not a sufficient condition in itself. Moreover, because the nonnative state is an intermediate or precursor of amyloid fibril formation, the conformational propensity of the denatured state is likely to be a determinant for amyloidogenicity. Consistent with this, studies with various mutants of acylphosphatase have demonstrated that hydrophobicity and ␤-sheet propensity of key regions, which are distinct from those parts important for protein folding, as well as the net charge of the protein, are critical factors for aggregation (49). These intrinsic properties are proposed as being key factors in determining aggregation rates of various proteins and peptides (50). Naiki et al. (37) established with ␤ 2 m that amyloid fibrils similar to those purified from patients can be formed in vitro by the seed-dependent extension reaction at pH 2.5. Because ␤ 2 m is substantially unfolded at pH 2.5 (Fig. 3F), the optimal fibril formation at pH 2.5 (Fig. 3E) is consistent with the idea that unfolding or destabilization of the native state is required to initiate fibril formation. However, in patients with dialysisrelated amyloidosis, amyloid fibrils are formed under physiological conditions at neutral pH. Although an increased concentration of ␤ 2 m is the most critical risk factor, other factors that destabilize the structure of ␤ 2 m and, moreover, trigger the fibril formation are not elucidated, and the mechanism of how amyloid fibrils are formed under physiological conditions remains unknown. Importantly, several reports have proposed the in vitro formation of ␤ 2 m amyloid fibrils under physiological conditions (20,23,27). 
In the present paper, we used the K3 peptide, a 22-residue proteolytic peptide, to explore the mechanism of amyloid fibril formation under physiological conditions. K3 peptide can form amyloid fibrils without seeding, indicating that the free energy barrier for the nucleation process of K3 fibril formation is lower than that of the whole ␤ 2 m molecule. The pH dependence of K3 fibril formation revealed that pH 2.5, which is the optimal pH for the seed-dependent extension reaction of full-length ␤ 2 m (Fig. 3E), is not optimal for K3. Instead, a maximum was observed at pH 7.5 for the K3 peptide without DTT (Fig. 3A). Consideration of the role of the disulfide bond revealed that reduced and oxidized K3 peptides have optimal fibril formation at pH 6 and 7.5, respectively, and that the disulfide bond accelerates fibril formation significantly. Thus, it is likely that the optimal pH for fibril formation of unfolded ␤ 2 m is also close to physiological conditions. One of the most important obstacles for fibril formation of whole ␤ 2 m at neutral pH is the folding to the native state. Moreover, it is likely that additional factors are required to trigger fibril formation considering the high free energy of the nucleation process. The present results imply that, once unfolded and the additional factors are provided, whole ␤ 2 m has an optimal potential to form amyloid fibrils at neutral pH. Recently, Radford and co-workers (27,28) studied the relationship between structural stability and amyloid formation with various mutants of ␤ 2 m. They indicated that, whereas destabilization of the native state is important for the generation of amyloid fibrils, the population of specific denatured states is a pre-requisite for amyloid formation. They proposed that perturbation of the N-and C-terminal edge strands (i.e. ␤A and ␤G) is an important feature in the generation of assemblycompetent states of ␤ 2 m (27). The unpairing of ␤A strand has also been suggested by other groups (18,20) to be a critical event leading to the amyloidogenic partially unfolded interme-diates. These results are basically consistent with ours in that ␤ 2 m has an intrinsic potential to form amyloid fibrils under physiological conditions once the rigid native structure is destabilized. We also prepared various mutants of ␤ 2 m, in which proline was introduced to each of the ␤-strands (36). These mutations affected the seed-dependent amyloidogenic potential at pH 2.5 to various degrees. The amyloidogenicity of mutants showed a significant correlation with stability of the amyloid fibrils, and little correlation was observed with that of the native state, indicating that stability of the amyloid fibrils is an additional key factor determining the amyloidogenic potential of the proteins. To obtain further insight, we considered the net charge of the ␤ 2 m and K3 peptide (Fig. 3). K3 peptide has three positive (i.e. ␣-amino group, His 31 , and Lys 41 ) and five negative (i.e. Cys 25 , Asp 34 , Glu 36 , Asp 38 , and ␣-carboxyl group) titratable groups, and its isoelectric point, which is calculated on the basis of their pK a values, is around pH 4.5 (Fig. 3). The net charge of the Cys-reduced K3 peptide at pH 6 is Ϫ1 and Ϫ2 at pH 7. Although slightly different, the net charge profiles of other K3 derivatives are similar to that of the reduced K3 peptide. Under acidic and basic pH regions, repulsion between the charged residues becomes a barrier for forming fibrils. 
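As a rough cross-check of the net-charge argument above, the charge of the K3 peptide as a function of pH can be estimated from the listed titratable groups with the Henderson-Hasselbalch relation, as sketched below. The pKa values are generic textbook numbers rather than the ones used for Fig. 3, so the curve is indicative only; it nevertheless gives a net charge near −1 to −2 around pH 6-7 and an isoelectric point in the vicinity of pH 4.5.

```python
import numpy as np

# Rough net-charge-vs-pH estimate for the K3 peptide (Henderson-Hasselbalch).
# The pKa set below is a generic textbook choice, not the one used in the paper.
PKA_POS = {"N-terminus": 9.0, "His31": 6.0, "Lys41": 10.5}           # +1 when protonated
PKA_NEG = {"Cys25": 8.3, "Asp34": 3.9, "Glu36": 4.1, "Asp38": 3.9,
           "C-terminus": 3.1}                                         # -1 when deprotonated

def net_charge(pH):
    positive = sum(1.0 / (1.0 + 10 ** (pH - pka)) for pka in PKA_POS.values())
    negative = sum(-1.0 / (1.0 + 10 ** (pka - pH)) for pka in PKA_NEG.values())
    return positive + negative

for pH in (2.5, 4.5, 6.0, 7.0, 7.5):
    print(f"pH {pH}: net charge ~ {net_charge(pH):+.1f}")
```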
On the other hand, under optimal pH conditions for fibril formation of K3, the net charge is Ϫ1 to Ϫ2. López de la Paz et al. (7) showed with a series of amyloidogenic peptides that a net charge less than Ϯ2 is one of the requisites for determining amyloidogenicity. Optimal fibril formation of the K3 peptide at pH 7 is consistent with their observations. However, the pH profile for fibril formation does not agree exactly with that of the net charge, suggesting that although the net charge is an important factor, additional factors determine the pH profile. We also plotted the pH dependence of the net charge of the whole ␤ 2 m molecule (Fig. 3E). The protein is highly positively charged (i.e. ϩ17) at pH 2.5. This is one of the reasons why ␤ 2 m is acid-unfolded at pH 2.5. Moreover, because of this high net charge, ␤ 2 m remains unfolded in the absence of seeds. The addition of seed affects the energetics, dramatically transforming ␤ 2 m to the fibrillar state even in the presence of high net charge repulsion. This suggests that, under favorable conditions, even peptides and proteins with a high net charge can form amyloid fibrils. Because the net charge of whole ␤ 2 m at neutral pH is close to zero, fibril formation at neutral pH is not surprising in the context of electrostatics. The intriguing observation in this context is the depolymerization of fibrils at pH 7 (38). Refolding of the monomeric ␤ 2 m to the native state is an important driving force for depolymerizing the fibrils at pH 7. Therefore, additional factors for stabilizing the fibrils, e.g. apolipoprotein E, are required to prevent depolymerization, thus promoting deposition of fibrils under physiological conditions (38). The depolymerization of K3 fibrils at pH 7, which were prepared at pH 2.5, was also noted (30). On the other hand, K3 fibrils prepared at pH 7 were stable at pH 2.5. 3 These results suggest that detailed structures of amyloid fibrils depend on the pH at which they were prepared. Whereas the basic folds of protofilaments are the same, the modes of association to form the mature fibrils might be different depending on pH. Further study is necessary to clarify the pH-dependent stability of amyloid fibrils. Role of the Disulfide Bond-For several amyloidogenic proteins including prion, intermolecular disulfide bond formation has been suggested to be linked with amyloid fibril formation (51)(52)(53)(54). Intriguingly, we observed a dramatic increase in amy-loidogenicity by disulfide bond formation. The lag phase of the spontaneous fibril formation was shortened at pH 2.5 and disappeared at pH 6 and 7.5 (Fig. 2). The intensity of ThT fluorescence increased markedly for all pH values examined. The increase in amyloidogenicity because of disulfide bond formation has also been observed for an 11-residue peptide, Asn 21 -His 31 , in the K3 region (42). As measured by EM, amyloid fibrils of disulfide bonded K3 are short at both pH 2.5 and 6.5 compared with those of the reduced or alkylated K3 monomers (Figs. 4 and 5). This suggests that, although the disulfide bond promotes fibril formation, rapid and extensive reactions fail to cooperatively form longer fibrils. Previously, we reported the role of the disulfide bond in conformation and amyloid fibril formation of intact ␤ 2 m (29). In amyloid fibrils of ␤ 2 m, more than 50% of amide protons mostly located at the central regions of the protein are highly protected from H/D exchange, which is distinct from the protected amide protons of the native state protein (31). 
However, reduced β2m, in which the disulfide bond between Cys25 and Cys80 (Fig. 1) is reduced, cannot form straight needle-like amyloid fibrils, although this molecule contains both of the minimal sequence regions. Instead, reduced β2m forms flexible, thin filaments, suggesting that the mature fibril is made of several filaments. Recovery of rigid fibril formation by the K3 peptide (Ser20-Lys41) upon removal of the non-essential regions indicates that regions other than the minimal region prevent fibril formation of reduced β2m. We have suggested that the flexibility of the non-core regions, which increases when the disulfide bond is reduced, inhibits formation of mature rigid fibrils (33). The increased amyloidogenicity of the dimeric K3 peptides connected by a disulfide bond indicates that linkage of the minimal region by the disulfide bond, without introducing a non-essential flexible region, contributes positively to fibril formation. Amyloid fibrils are formed by the intermolecular association of amyloidogenic peptides or proteins. Thus, dimer formation by the disulfide bond is anticipated to effectively increase the chance of intermolecular encounters by increasing the effective concentration of monomers, unless the disulfide bond introduces inhibitory effects through steric constraint. It is emphasized that, in our case, increased fibril formation resulted in the formation of a large number of short filaments. Conclusion-Intact full-length β2m forms amyloid fibrils in vitro by seed-dependent extension with a maximum at pH 2.5. However, pH 2.5 is far from physiological conditions, and the validity of the results from the pH 2.5 experiments with regard to the mechanism of fibril formation under physiological conditions has to be questioned. In the present paper, we showed that the 22-residue K3 peptide, which probably constitutes both the initiation and core region of the β2m amyloid fibrils, forms amyloid fibrils with an optimum at around pH 7. This strongly suggests that, although the rigid native fold of β2m at neutral pH prevents amyloid fibril formation, β2m has an intrinsic amyloidogenic preference at neutral pH. Thus, once the native structure is destabilized under conditions where seeds or yet unknown additional factors triggering fibril formation are available, it is likely that β2m starts to form amyloid fibrils in a manner similar to that observed at pH 2.5. Moreover, the present results reveal a variety of K3 fibril morphologies depending on pH and the disulfide bond. Although different, the K3 fibrils can all promote fibril formation of whole β2m by seeding effects. The seeding efficiency varied, and K3 fibrils prepared under the same pH conditions were more effective, suggesting that, whereas the basic internal structures are common, slight differences in the higher order structures determine the efficiency of heterogeneous seeding.
3D-printed miniature spectrometer for the visible range with a 100 × 100 μm² footprint The miniaturisation of spectroscopic measurement devices opens novel information channels for size-critical applications such as endoscopy or consumer electronics. Computational spectrometers in the micrometre size range have been demonstrated; however, these are calibration sensitive and based on complex reconstruction algorithms. Herein we present an angle-insensitive 3D-printed miniature spectrometer with a direct, separated spatial-spectral response. The spectrometer was fabricated via two-photon direct laser writing combined with a super-fine inkjet process. It has a volume of less than 100 × 100 × 300 μm³. Its tailored and chirped high-frequency grating enables strongly dispersive behaviour. The miniature spectrometer features a wavelength range of 200 nm in the visible range from 490 nm to 690 nm. It has a spectral resolution of 9.2 ± 1.1 nm at 532 nm and 17.8 ± 1.7 nm at a wavelength of 633 nm. Printing this spectrometer directly onto camera sensors is feasible and can be replicated for use as a macro-pixel of a snapshot hyperspectral camera. Introduction The field of micro-optics has been transformed by the use of femtosecond direct laser writing as a 3D printing technology since the early 2000s. Both the complexity and surface quality have advanced from simple microlenses 1,2 and achromats 3 to multi-lens and multi-aperture objectives 4,5 . A similar development has occurred with diffractive optics, which have evolved from simple gratings 6 to stacked diffractive microlens systems 7 . Along with imaging optics, photonic crystals 8 , waveguides 9,10 , and collimators 11 , complex beam shapers 12-14 have been demonstrated. This rapid development reflects the significant potential of 3D printing technology in micro-optics. Advancements have enabled access to the millimetre scale and larger with 3D printing technology 15-17 . Meanwhile, the almost unlimited optical design freedom that it offers makes further miniaturisation possible, thereby increasing the given functionality in decreasing volumes on the micrometre scale. The aim of this study was to enhance both the complexity and miniaturisation of 3D-printed micro-optics and to create an entire integrated measurement system, a spectrometer, in a 100 × 100 × 300 μm³ volume. The miniaturisation of spectroscopic measurement devices opens novel information channels for size-critical applications. For instance, medical engineering, consumer electronics, and downscaled chemical engineering could benefit from a cost-effective, efficient, and readily integrated spectroscopic micro-device. Spectra could be retrieved from the tip of a distal-chip endoscope with a bending radius smaller than that of an optical fibre to explore regions that are otherwise inaccessible. With a size one or two orders of magnitude smaller than a smartphone camera objective, integration into consumer electronics is realisable for applications such as skin disease diagnosis 18 and counterfeit bank note detection 19 . Furthermore, chemical micro-reactors (such as that proposed by Potyrailo et al. 20 ) could be further miniaturised by precise absorption and emission line observation via 3D-printed miniature spectrometers integrated into a microreactor platform. Various concepts for miniaturised spectrometers have been demonstrated and commercialised. There are three main categories for these spectrometers: direct, computational, and filter-based.
Spectrometers in the first category create a spatial-spectral response by redistributing incoming light, which can be measured directly by a line or image sensor. Spectrometers in the second category create a mixed but spectrum-dependent unique signal from which the original spectrum can be computationally reconstructed (wavelength multiplexing). The third category combines all concepts with narrowband wavelength filtering per filter patch. The 3D-printed spectrometer presented in this paper is assigned to the first class of direct spectrometers. To the best of our knowledge, direct spectrometers presented in the literature possess a footprint area that is at least two orders of magnitude larger than our 3D-printed spectrometer (see Fig. S1). These range from commercial grating spectrometers 21,22 and scanning micro-opto-electromechanical system spectrometers 23 to integrated approaches with digital planar holograms 24 or ring-resonator enhancement 25 . An exceptionally high ratio of bandwidth per resolution has been reached with a grating multiplexing approach in the centimetre size range 26 . Meanwhile, the footprint is reduced by two to six orders of magnitude in our 3D-printed spectrometer, whereas the spectral bandwidth per resolution ratio of these direct spectrometers can be one to three orders of magnitude higher. In the computational spectrometer category, several new technologies have been proposed, such as colloidal quantum dot 27 or photonic crystal slab 28 patches for wavelength multiplexing. The single patches are in the 100-micrometre size range; however, an entire spectrometer consists of multiple patches. Thus, the filter size must be multiplied accordingly. It is our understanding that only two spectrometers have been demonstrated in which the system sizes are of the same order of magnitude as our footprint of 100 × 100 μm². They are based on nanowires 29,30 or disordered photonic structures 31 , and they have a bandwidth per resolution ratio that is similar to the one in our approach. However, owing to wavelength multiplexing, the system must be calibrated with an iterative reconstruction algorithm to deduce the original spectrum. In contrast, we present an angularly insensitive spectrometer with a directly separated spatial-spectral response. The third category, filter-based spectrometers, is based on traditional concepts, such as dyes or Fabry-Pérot etalons, along with recent concepts, such as nanowire grids, plasmonic nanohole arrays, quantum dots 32 , colloidal quantum dot sensors combined with Bragg mirror filter arrays 33 , and ultrafast pyroelectric metasurfaces 34 . However, owing to their narrowband filtering, a considerable portion of the signal is lost. In contrast, our 3D-printed spectrometer collects light with a numerical aperture (NA) larger than 0.4 and a light collection area of 50 × 50 μm² at the first lens, and redistributes the wavelengths with minimal losses. Overall, the 3D-printed miniature spectrometer presented herein marks an innovation for 3D-printed micro-optics in terms of complexity, while exploiting for the first time the micrometre size range for direct spectrometers. We assessed the performance of the spectrometer in a miniature volume of 100 × 100 × 300 μm³ for a wavelength range of 490 nm to 690 nm. In the design concept (Fig.
1a), the strengths of 3D printing are considered: freeform surfaces and inherent near-perfect alignment owing to simultaneous mount and lens fabrication, which enable heavily tilted and asymmetrical surfaces. Incoming light is collected by a cylindrical lens with a quadratic aperture of 50 × 50 μm², which corresponds to an NA of 0.42 in the y-direction at the slit. The slit is realised not in air but in a photopolymer to serve simultaneously as an ink barrier. The diverging rays exiting the slit are collimated by the subsequent surface. This collimation lens is terminated by a tilted surface to compensate for the grating deflection angle in advance, thereby keeping the spectrometer within its narrow footprint. The grating surface is then illuminated by the collimated rays. The last surface serves as an imaging lens to separate the wavelengths at an output plane nearby, such as for an image sensor in future applications. The full spectrometer thus has a length below 300 μm, which is inside the high-precision piezo z-stage range of the 3D printer. This last surface is also tilted with respect to the grating surface and thus restricts the chief ray angles (CRAs) at the output image plane to a maximum angle of 16° at 690 nm wavelength. Therefore, each field could be coupled efficiently without crosstalk into an image sensor pixel with CRA limitations. The implementation of the described design principle is shown as a sequential ray-tracing model in Fig. 1b. In extending the general design concept, slight curvatures and aspherical extensions were permitted for the tilt and grating surfaces to minimise aberrations. Detailed surface-type descriptions are given in the Materials and Methods section. The most important functional surfaces of the spectrometer are highlighted on the right side of the figure. Because the spectrometer size is in the micrometre range, diffraction must be considered for the slit design. The slit width was designed to be 1.3 μm to create a single-slit diffraction pattern with an opening angle that is suitable for the collection NA of 0.42 (with regard to its first minima positions at 550 nm). The grating is designed to consist of as many lines as can be manufactured to maximise the spectrometer resolution. The fabrication process generally accommodates feature sizes down to 100 nm 35 . However, if the photoresist is printed as a bulk material, as in the case of the grating, the resolution decreases owing to the accumulated laser energy, and thus polymerisation increases in the vicinity of the laser focus. Consequently, the grating period was restricted to a minimum period of 650 nm to be sufficiently resolved (see Fig. S3). The first diffraction order of the grating is used as the spectrometer measurement signal. The grating topography was thus established for maximum diffraction into the first order. Both the zeroth and second orders are spatially separated such that the measurement signal is not affected. The period is chirped and spans from 650 nm to 860 nm across the grating width of 38 μm. This is a considerably smaller portion of the 100 μm footprint for two reasons. First, the total length of the spectrometer is restricted to < 300 μm. Therefore, the beam paths for collecting, expanding, and collimating the light were constrained. Second, the width of the collecting lens is limited to 50 μm; hence, a 25 μm circumferential gap enables the needle to access and apply the super-fine inkjet process described later in this section.
The combination of these two restrictions leads to the beam width of 38 μm at the grating surface. Ultimately, the median frequency is 1.34 μm⁻¹ and 51 grating lines are illuminated. This results in a theoretical resolution limit of Δλ = λ/N = 10.4 nm at 532 nm and 12.4 nm at 633 nm for diffraction-limited performance. In the ray-tracing simulation, all fields were emitted from a point in the centre of the slit. Because the lenses have a cylinder-like shape, the x-direction is not focused. The slight curvature of each dispersed slit image in the footprint spot diagram is ascribed to conical diffraction as a result of rays tilted in the x-direction at the grating surface. The lenses were integrated into a printable computer-aided design (CAD) model, including an ink basin (Fig. 1c). The central plane (indicated in red) was investigated wave-optically using the wave propagation method 36,37 (WPM) for plane-wave illumination (Fig. 1d). Around the slit, a perfectly absorbing material is assumed. Simulation wavelengths are in the visible range from 490 nm to 690 nm in 40 nm steps. The terminating lens surfaces, e.g. the right edges of the lowest lens block, were designed such that the WPM simulation resulted in a sufficient separation of the zeroth diffraction order from the first order of interest without exceeding the footprint. Accordingly, multiple spectrometers could be printed in close vicinity without crosstalk. The first diffraction order is visible as coloured, separated foci, while the non-dispersed zeroth order is initially reflected at the right lens edge and redirected to the right part of the slit image plane, well separated from the first-order foci. Furthermore, the focal length of the last lens was optimised such that the first diffraction order is focused in the z-direction within the working distance of the printing objective (< 300 μm total length of the spectrometer), which is also within the high-precision piezo z-stage range of the 3D printer. This restriction enables repeated closely spaced printing of the spectrometers directly onto the sensor surface with consideration of the fabrication modalities. In the next step, the spectrometer was fabricated via two-photon direct laser writing (DLW). Subsequently, the absorbing structure around the slit was realised by the application of a super-fine inkjet process 38,39 (details are given in the Materials and Methods section). The inkjet process and final miniature spectrometer are shown in Fig. 2a. Both the two-photon DLW and the inkjet process were applied repeatedly; thus, an array could be fabricated, as shown in the figure. To quantify the spatial filtering of the slit, only the upper part of the spectrometer was fabricated. For this purpose, the CAD model was cut between the bottom of the ink basin and the collimation lens. The collector lens was illuminated with wavelengths ranging from 490 nm to 690 nm, and the slit was measured with a monochromatic video microscope (Fig. 2b). From the microscope image, an intensity profile was obtained as indicated. The profiles for each wavelength, and all profiles normalised and subsequently added, are shown in Fig. 2c. The spectral distribution of the profiles reflects the absorption of the photoresist IP-Dip 40 combined with the spectral distribution of the monochromator that was used for the measurement (see Fig. S2). The average slit width at full width at half maximum (FWHM) is 1.3 μm, which is in good agreement with the design.
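The quoted design figures can be cross-checked with elementary relations. The short script below is only a consistency check of the numbers given in the text (slit width, beam width, median grating frequency); the choice of 532 nm and 633 nm follows the laser lines used later for characterisation, and the actual design was performed in ZEMAX.

```python
# Single-slit diffraction: first minima at sin(theta) = lambda / w.
wavelength_m = 550e-9        # design wavelength quoted in the text
slit_width_m = 1.3e-6        # designed slit width
print(f"sin(theta) at the first minima: {wavelength_m / slit_width_m:.2f}")   # ~0.42, matches the collection NA

# Chirped grating: illuminated line count = beam width x median spatial frequency.
beam_width_um = 38.0
median_frequency_per_um = 1.34
n_lines = beam_width_um * median_frequency_per_um
print(f"illuminated grating lines N ~ {n_lines:.0f}")                          # ~51

# Diffraction-limited resolving power of the first order: d_lambda = lambda / N.
for lam_nm in (532.0, 633.0):
    print(f"theoretical resolution at {lam_nm:.0f} nm: {lam_nm / n_lines:.1f} nm")
```

The output reproduces the values stated above: an opening angle matching the NA of 0.42, roughly 51 illuminated lines, and theoretical resolution limits of about 10.4 nm and 12.4 nm.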
To quantify the signal-to-noise ratio, a sinc² function with a FWHM of 1.3 μm is subtracted from the measurement data, and the remaining positive signal is regarded as noise. The maximum noise level is 15%, while the average noise level is 5% ± 3%. The miniature spectrometer was experimentally characterised using the setup depicted in Fig. 3. A fibre-coupled monochromator or laser (only for the linewidth measurement) is used as the illumination source. The output facet of the fibre is imaged with a 100x microscope objective to the front of the 3D-printed spectrometer. This configuration is used for multiple reasons. First, the illumination is concentrated on the region of interest, namely the spectrometer. Second, stray light from outside the 100 × 100 μm² footprint is suppressed. In a real-world application, stray light could be shielded by a chromium aperture without tight alignment tolerances at the spectrometer front. Third, the angular extent of the fibre facet emission is increased owing to the magnification of −0.13 and the high NA of 0.9 of the described imaging setup. Accordingly, the angular insensitivity of the spectrometer, that is, the successful spatial filtering, can be demonstrated. On the microscope side of the measurement setup (Fig. 3b), the dispersed slit image plane is recorded with a 50x microscope objective with an NA of 0.55 combined with a tube lens and a monochromatic image sensor. In the recorded images, the first diffraction order is visible as a tight line that moves in the y-direction when the wavelength is shifted. In contrast, the zeroth order is stationary at the left end of the recorded images. To measure the spatial-spectral response, an intensity profile across the centre of the first-order movement range is recorded, as indicated by the white dashed line. In Fig. 4a, the normalised spatial-spectral response measured at the slit image plane of the spectrometer is shown for a wavelength range from 490 nm to 690 nm in 10 nm steps. The peaks of the single profiles are well separated, and the average noise level is on the order of 5% to 20%. Each profile can be fitted with a sinc² function to eliminate the noise, as depicted in Fig. 4b. The width of the single-wavelength profiles underestimates the resolution capacity of the miniature spectrometer because the monochromator-line spectral width has a magnitude similar to the resolution capacity to be measured. The measured peak positions follow the ray-tracing simulation data (Fig. 4d); however, the wavelength shift per micrometre is larger in the measurement than in the simulation. This is presumably due to the shrinkage of the photoresist, which leads to a focus shift towards the last lens block and thus to closer spacing of the wavelengths. The spectral resolution of the miniature spectrometer was assessed with narrow-bandwidth illumination sources (a green laser at 532 nm and a helium-neon laser at 633 nm) (Fig. 4e). These two laser wavelengths sample the resolution in the shorter and longer wavelength ranges of the total spectral range of our spectrometer. For speckle reduction, a vibrating diffuser was integrated before coupling into the multimode fibre, and multiple measurements were obtained for temporal averaging. At 532 nm illumination wavelength, the measured profile has a centre peak with a comparable and even slightly narrower FWHM linewidth than the profile simulated with the WPM. In exchange, the side lobes are more pronounced.
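For the sinc² fits mentioned above, a minimal sketch is given below. The exact fit routine is not specified in the text, so the parameterisation (centre, width, amplitude, offset) and the use of scipy.optimize.curve_fit are illustrative assumptions; the fitted centre is what is later converted into a wavelength, and the FWHM enters the resolution estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinc2(x, x0, width, amplitude, offset):
    """Assumed fit model: shifted, scaled sinc^2 (np.sinc(u) = sin(pi*u)/(pi*u))."""
    return amplitude * np.sinc((x - x0) / width) ** 2 + offset

def fit_line_profile(position_um, intensity):
    """Fit one measured line profile; return the peak centre (um) and its FWHM (um)."""
    p0 = [position_um[np.argmax(intensity)],            # centre guess at the brightest pixel
          1.0,                                          # width guess in um
          float(intensity.max() - intensity.min()),     # amplitude guess
          float(intensity.min())]                       # offset guess
    popt, _ = curve_fit(sinc2, position_um, intensity, p0=p0)
    centre, width = popt[0], abs(popt[1])
    fwhm = 0.886 * width        # FWHM of sinc^2 in units of the width parameter
    return centre, fwhm
```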
This behaviour may be due to the non-uniform illumination of the slit or grating, comparable to the intensity distribution of a Bessel function as the diffraction pattern of a ring pupil. In such a case, the FWHM width is smaller than that of the corresponding Airy disc, while the side lobes carry more energy 41 . The resolution at 532 nm is determined from the FWHM of 1.01 μm indicated in the graph. From Fig. 4d, the measurement data in the wavelength range from 500 nm to 560 nm are averaged to calculate the mean wavelength shift per micrometre around the laser wavelength of 532 nm. Multiplication of this value by the FWHM yields a spectral resolution of 9.2 ± 1.1 nm, which is close to the diffraction limit of 10.4 nm and the wave-optical linewidth simulation of 12.5 ± 1.5 nm. The linewidth measurement at the 633 nm wavelength is evaluated accordingly. Here, the mean wavelength shift per micrometre in the range from 600 nm to 660 nm is considered. It yields a spectral resolution of 17.8 ± 1.7 nm at a wavelength of 633 nm. The wave-optical linewidth simulation suggests a slightly better resolution of 14.5 ± 1.4 nm. This deviation could be due to imperfections in the refractive optical surfaces printed with the high-resolution photoresist IP-Dip, which has some deficiencies in terms of smoothness. Finally, the spectrum of an unknown light source (broadband multiple-LED light) was recorded and compared to that of a commercial spectrometer (see Fig. 5). Because of the relatively high noise level, an averaged, weighted noise signal was subtracted from all the measured spectra (see the Materials and Methods section and Supplementary Material for further information). A wavelength-dependent calibration factor was determined by the quotient of a known spectrum (halogen lamp) that was measured with both our spectrometer and a commercial spectrometer. The measurement of the unknown light source was multiplied by this calibration factor. The unknown spectrum measurement shows a slightly lower signal in the range of 500-570 nm and a slightly higher signal in the range of 620-670 nm, with overall satisfactory agreement, although our spectrometer has a footprint eight orders of magnitude smaller. Especially in the lower wavelength range, small features at distinct wavelengths that are visible in the spectrum recorded using the commercial spectrometer are distinctly perceptible in the measurement obtained using our spectrometer. Discussion The study presented herein proves the feasibility of a fully functional spectrometer fitted in a volume of only 100 × 100 × 300 μm³. The size comparison with a standard smartphone camera lens in Fig. 6 highlights the potential of this technology. At a size more than one order of magnitude smaller than the objective, the spectrometer can be integrated almost non-invasively into a smartphone camera, i.e. printed directly onto the camera sensor, for example for single spectral measurements or as individual pixels of a snapshot hyperspectral camera. The feasibility of two-photon 3D printing directly onto image sensors has been demonstrated in previous studies 4,5 . It should be noted that the wavelength shift per micrometre has a value of 9.1 ± 1.1 nm/μm at 532 nm and 7.2 ± 0.7 nm/μm at 633 nm. Therefore, a monochromatic image sensor with a pixel pitch below 1 μm must be used to resolve the spectral lines. Current trends in this direction are evident. For example, Samsung has announced a 47.3 MP sensor with a 0.7 μm pixel pitch (ISOCELL Slim GH1).
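The resolution values quoted above follow from multiplying the fitted spot FWHM by the local dispersion. The sketch below reproduces this step; the quadrature error propagation is an assumption, since the paper only reports the final uncertainties.

```python
import math

def spectral_resolution(fwhm_um, dispersion_nm_per_um, fwhm_err_um=0.0, dispersion_err=0.0):
    """Resolution = fitted spot FWHM (um) x local dispersion (nm/um).
    Combining independent relative errors in quadrature is an assumption."""
    value = fwhm_um * dispersion_nm_per_um
    rel = math.hypot(fwhm_err_um / fwhm_um, dispersion_err / dispersion_nm_per_um)
    return value, value * rel

# Values quoted in the text for the 532 nm line: FWHM 1.01 um, dispersion 9.1 +/- 1.1 nm/um.
print(spectral_resolution(1.01, 9.1, dispersion_err=1.1))   # ~ (9.2 nm, ~1.1 nm)
```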
Moreover, high-resolution monochromatic image sensors have entered the mass market through the Huawei Mate 9 smartphone. Furthermore, the implementation of a spectrometer fabricated strictly by 3D printing can be highlighted as a novelty. The measured ratio of bandwidth per resolution of 1/2 · 200 nm · (1/Δλ_532 + 1/Δλ_633) = 16.5 is in a margin similar to those of other spectrometers that have been demonstrated in this size range 29,31 . In the category of direct spectrometers, the microspectrometer presented here is the first of its kind in this size range (see Fig. S1). Nevertheless, our spectrometer has a limitation in that a relatively high noise level can be identified. A possible solution could be the design of an ink basin with improved suppression enabled by an increase in the ink layer that is close to the slit. However, a slit elongated in the z-direction for an improved absorptive ink layer thickness would likely decrease the spectrometer light efficiency. Another part of the noise could originate from the utilisation of the high-resolution photoresist IP-Dip, which enables small voxel sizes and is thus well suited for the creation of high-frequency grating structures. However, a deficiency of this material is the roughness of the fabricated surfaces. Kirchner et al. 42 reported a surface roughness of IP-Dip on the order of 100 nm, which makes this photoresist type disadvantageous for refractive lenses. Since diffractive and refractive structures are both part of the spectrometer, the refractive surfaces are compromised. The use of a smoother photoresist, for example IP-S, which can exhibit an optical-quality root mean square surface roughness below 10 nm 14 , could be examined in future work. This photoresist may be beneficial for spectrometers with a larger footprint and lower grating frequencies. Multimaterial printing, as demonstrated by Schmid et al. 3 and Mayer et al. 43 , could resolve these conflicting requirements. Further optimisation of 3D-printing parameters, as proposed by Chidambaram et al. 44 , could lead to smoother surfaces at the expense of printing time. The printing parameters have thus far been optimised with a focus on the slit and grating, which both require high resolution rather than smoothness. As presented in the introduction, our spectrometer is classified in the direct spectrometer category because of its direct spatial-spectral response. However, in combination with a computational approach, the (static) noise can be interpreted as a multiplexed wavelength signal. An iterative reconstruction algorithm, such as that used by Bao et al. 27 , could be employed to sharpen the spectral lines of our spectrometer. In its current configuration, the spectrometer can readily be used as a wavemeter because the centres of the sinc² fits yield distinct results in the presence of noise. As a miniature wavemeter, it could be employed as a highly integratable laser wavelength stabiliser. Additionally, the wavelength range of the spectrometer presented here is extendable. 3D printing comes with a high degree of individuality at a comparably low cost. Therefore, an array of slightly different spectrometers could be fabricated, with each being optimised for another part of the spectrum. This is a major advantage over spectrometers fabricated using non-classical approaches, such as colloidal quantum dots 27 , photonic crystal slabs 28 , nanowires 29 , and disordered photonic structures 31 .
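The bandwidth-per-resolution figure of merit quoted above evaluates as follows (a trivial numerical check of the stated value of 16.5):

```python
# Bandwidth-per-resolution figure of merit from the measured resolutions.
bandwidth_nm = 200.0                       # 490-690 nm operating range
d_lambda_532, d_lambda_633 = 9.2, 17.8     # measured resolutions (nm)
ratio = 0.5 * bandwidth_nm * (1.0 / d_lambda_532 + 1.0 / d_lambda_633)
print(f"bandwidth / resolution ~ {ratio:.1f}")   # ~16.5
```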
These methods have either a rather small bandwidth or cannot be intrinsically extended to the infrared region. In our classical grating spectrometer approach, the only limit for wavelengths in the infrared region is either the transmittance of the photoresist or the absorption of the sensor that is used to record the signal. The photoresist IP-Dip has an extinction coefficient well below 0.1 mm⁻¹ in the infrared region up to 1600 nm wavelength 40 . Thus, our approach could readily be employed in this region. Moreover, the design can be based on the same principle, as presented in Fig. 1. By leveraging minor adaptations of the surface tilts and the grating phase profile, together with re-optimisation of the aspherical surface coefficients, the spectral cut-out could be shifted towards longer wavelengths. For instance, a grid of four individual spectrometers could enable a spectrometer with a wavelength range of 800 nm on a footprint of 200 × 200 μm². Conclusion In this study, the feasibility of a 3D-printed spectrometer in a miniature volume of 100 × 100 × 300 μm³ was theoretically assessed and experimentally proven. The fabricated spectrometer has a wavelength range of 200 nm in the visible range, and a spectral resolution of 9.2 ± 1.1 nm at 532 nm and 17.8 ± 1.7 nm at 633 nm wavelength. To the best of the authors' knowledge, the presented spectrometer is the first spectrometer with a direct spatial-spectral response that has been demonstrated in this size range. Compared to computational and filter-based approaches, its angular insensitivity and relatively large aperture of 50 × 50 μm² are emphasised. A hybrid direct-computational approach could significantly improve the average noise level of 5% to 20%. Wavelength range extensions of 200 nm per 100 × 100 μm² footprint were proposed in a multi-aperture approach by implementing the same design principle for adjacent parts of the spectrum. Possible applications range from endoscopy devices and consumer electronics to chemical microreactors. Materials and methods The proposed spectrometer was designed using ZEMAX OpticStudio optical design software in the sequential mode for a wavelength range of 490 nm to 690 nm. The glass model that was utilised is the photoresist IP-Dip presented by Gissibl et al. 45 . For ray tracing, the spectrometer was divided into two parts: the light collector (part 1, top lens to slit) and the dispersing imager (part 2, slit to the dispersed slit image plane). All surfaces except the grating surface and the photoresist/air interface behind the slit are described by a toroidal surface with a rotation radius of infinity. The surfaces have a cylinder-like shape with a 50 μm extension in the x-direction. The surface sags z_toroidal(y) are defined by Eq. 1, where the curvature and the coefficients are optimisation variables. Part 1 consists of a toroidal surface with coefficients up to the second order followed by a propagation distance of 90 μm inside the photoresist to the slit. In part 2, ray tracing for all fields starts at the centre of the slit with the same NA as the rays focused by the collector. Accordingly, spatial filtering at the slit is considered, and part 2 can be optimised for the best imaging (minimum spot width per wavelength) of the slit plane. After a distance of 4.5 μm, the photoresist/air interface is simulated by a plane to complete the ink basin described later in this section. The following optical surfaces, except the grating surface, are described by Eq. 1 with coefficients up to the tenth order.
The grating surface is described by the sequential surface-type elliptical grating 1, and the surface sags z_grating(y) are given by a separate phase-profile expansion, where the optimisation variables are T_0, α, β, γ, and δ. This phase profile was translated into a topography using MATLAB by applying the first-order blazed condition for a wavelength of 550 nm at a fixed incident angle of the chief ray to the field-dependent (chirped) grating deflection angle. Subsequent to the optical design, lens mounts and an ink basin were added to the optimised lens surfaces in the SolidWorks CAD programme. A two-dimensional scalar wave-optical simulation using the wave propagation method 36,37 (WPM) was performed on a discretised volume model in the central plane for a normally incident plane wave that covers the full width of the spectrometer. To approximate the absorbing behaviour of the ink basin, a perfectly absorbing layer was introduced to the model at the inside edges of the basin. For the results presented in Fig. 1d, a spectrum simulation from 490 nm to 690 nm in steps of 40 nm was performed. Each resulting intensity distribution was normalised to its own maximum value in the observation plane and converted to an RGB representation according to Bruton 46 for display purposes. In the next step, the CAD model was sliced at a distance of 100 nm, hatched strictly in the direction of the grating lines (x-direction) at a distance of 250 nm, and fabricated by means of 3D dip-in two-photon DLW using a Photonic Professional GT2 (Nanoscribe GmbH, Karlsruhe, Germany) and the proprietary negative-tone photoresist IP-Dip 44 on a glass substrate with an ITO coating. The laser source of the 3D printer is a pulsed femtosecond fibre laser with a centre wavelength of 780 nm, a specified average laser power output of 120 mW, a pulse length of approximately 100 fs at the source, and an approximate repetition rate of 80 MHz. The spectrometer was printed with 7.5 mW laser power and a scan speed of 15 mm/s. The total printing time of a single spectrometer was just below 2 h. The dispersed slit image plane coincided with the glass surface. The polymerised samples were developed in propylene glycol methyl ether acetate for 8 min to wash out the unexposed photoresist. Subsequently, the sample was rinsed in an isopropanol bath for 2 min and dried with a nitrogen blower. The slit was fabricated with the SIJ-S030 super-fine inkjet printer (SIJ Technology, Inc., Tsukuba, Japan). The ink utilised for the creation of the non-transparent structures was NPS-J (Nanopaste Series, Harimatec, Inc., Georgia, USA). It is a conductive ink that comprises a silver nanoparticle content of 65 mass % with a particle size of 12 nm. When a high voltage is applied, the printer can dispense droplet volumes of 0.1 fl to 10 pl from a needle tip with a diameter below 10 μm, which fits into the gap between the collector lens and the ink basin walls. (Details on this fabrication method are provided in Toulouse et al. 38,39 .) This process was applied to fill the ink basin that was defined in the CAD design. The spatial-spectral response measurements were performed with a fibre-coupled home-built monochromator based on a dimmable 150 W quartz halogen lamp (model I-150, CUDA Products Corp., Fiberoptic Lightsource, Florida, USA) as the illumination source.
The linewidth measurements were performed with a green laser source (532 nm, 5 mW) or a red helium-neon laser (632.8 nm, 5 mW, model 05-LLP-831, Melles Griot, Darmstadt, Germany), respectively, in combination with a vibrating diffuser, coupled into the same fibre as the monochromator (300 μm, NA 0.39, model M69L02, Thorlabs, New Jersey, USA). The output fibre facet was imaged with a 100x objective with NA 0.9 (M Plan Apo HR 100X, Mitutoyo, Kawasaki, Japan) to the front of the 3D-printed spectrometer. The dispersed slit image plane of the spectrometer was recorded with a monochromatic video microscope. The video microscope consisted of a 50x objective with NA 0.55 (50X Mitutoyo Plan Apo, Edmund Optics, New Jersey, USA) in combination with a tube lens (MT-4, Edmund Optics, New Jersey, USA) and a monochromatic camera (UI-3180CP-M-GL R2, IDS, Obersulm, Germany). For the linewidth measurement, the video microscope was focused on the minimum detectable linewidth per wavelength, and the simulated linewidth was offset for the best fit. Before each measurement and for each wavelength, the camera integration time was adjusted to record a high signal without saturation. For each measurement, 20 images and 20 profiles per image were recorded and averaged after subtraction of a dark image. The slit width measurement was performed similarly; however, instead of the entire spectrometer, only part 1 (collector lens, slit, and ink basin) was fabricated, and the video microscope was focused on the slit. Here, the integration time of the camera was adjusted to the overall maximum signal and was the same for all wavelengths. For all spectrum measurements, the noise was evaluated and subtracted (see Supplementary Material for further information). The wavelengths outside the spectrum of our spectrometer were filtered with a long-pass filter (pass > 500 nm) and a short-pass filter (pass < 700 nm) (models FEL0500 and FES0700, Thorlabs, New Jersey, USA). The setup shown in Fig. 3 was used. The measured profile was translated into a wavelength-dependent signal by evaluating the centres of the sinc² fits (see Fig. 4b-e). The relative intensity calibration of the spectrometer was conducted with a 150 W quartz halogen lamp (model I-150, CUDA Products Corp., Fiberoptic Lightsource, Florida, USA). A reference spectrum was recorded with a commercial spectrometer (AvaSpec-ULS2048CL-EVO-FCPC, Avantes, Mountain Photonics GmbH, Landsberg am Lech, Germany) and was normalised to its maximum intensity. The calibration factor k(λ) was calculated as the quotient of this reference spectrum and the corresponding halogen spectrum measured with our spectrometer. As a light source for the unknown spectrum measurement, a white-light LED (model MCWHLP1 and collimator SM2F32-A, Thorlabs, New Jersey, USA) was coupled into the multimode fibre of the measurement setup. The final spectrum measurement was calibrated as I_m,LED,calib(λ) = k(λ) · I_m,LED(λ).
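The relative intensity calibration described above amounts to a per-wavelength ratio against the halogen reference. A minimal sketch is given below; resampling both spectra onto a common wavelength grid beforehand is assumed, since the text does not describe that step, and the function names are illustrative.

```python
import numpy as np

def calibration_factor(reference_halogen, measured_halogen):
    """k(lambda): commercial-spectrometer halogen reference divided by the same
    lamp measured with the miniature spectrometer (both sampled on one grid)."""
    return np.asarray(reference_halogen) / np.asarray(measured_halogen)

def apply_calibration(measured_led, k):
    """I_m,LED,calib(lambda) = k(lambda) * I_m,LED(lambda), as stated in the text."""
    return k * np.asarray(measured_led)
```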
Effects of Extrinsic Mortality on the Evolution of Aging: A Stochastic Modeling Approach The evolutionary theories of aging are useful for gaining insights into the complex mechanisms underlying senescence. Classical theories argue that high levels of extrinsic mortality should select for the evolution of shorter lifespans and earlier peak fertility. Non-classical theories, in contrast, posit that an increase in extrinsic mortality could select for the evolution of longer lifespans. Although numerous studies support the classical paradigm, recent data challenge classical predictions, finding that high extrinsic mortality can select for the evolution of longer lifespans. To further elucidate the role of extrinsic mortality in the evolution of aging, we implemented a stochastic, agent-based, computational model. We used a simulated annealing optimization approach to predict which model parameters predispose populations to evolve longer or shorter lifespans in response to increased levels of predation. We report that longer lifespans evolved in the presence of rising predation if the cost of mating is relatively high and if energy is available in excess. Conversely, we found that dramatically shorter lifespans evolved when mating costs were relatively low and food was relatively scarce. We also analyzed the effects of increased predation on various parameters related to density dependence and energy allocation. Longer and shorter lifespans were accompanied by increased and decreased investments of energy into somatic maintenance, respectively. Similarly, earlier and later maturation ages were accompanied by increased and decreased energetic investments into early fecundity, respectively. Higher predation significantly decreased the total population size, enlarged the shared resource pool, and redistributed energy reserves for mature individuals. These results both corroborate and refine classical predictions, demonstrating a population-level trade-off between longevity and fecundity and identifying conditions that produce both classical and non-classical lifespan effects. Introduction The evolutionary theories of aging attempt to explain why natural selection would favor the almost ubiquitous evolution of senescence, a process which markedly decreases Darwinian fitness. Classical theory can be subdivided into three distinct yet analogous perspectives -Medawar's ''mutation accumulation,'' Williams's ''antagonistic pleiotropy,'' and Kirkwood's ''disposable soma'' theories [1][2][3]. The ''mutation accumulation'' theory posits that, since extrinsic mortality (e.g., predation and disease) is typically high in the wild, few animals will survive long enough to exhibit senescence and the force of natural selection will decline with age. As such, late-acting deleterious mutations will accumulate over time and passively lead to the development of aging [4]. It should be noted that, while old age is undoubtedly rare for many species (e.g., wild mice) [5], there are notable exceptions (e.g., naked mole rats [6] and blind cave salamanders [7]). Moreover, senescence has been observed in the wild for many species of mammals, birds, other vertebrates, and insects [8]. The ''antagonistic pleiotropy'' theory supposes that senescence evolved due to the active selection of pleiotropic genes, which are beneficial early in life and harmful later in life [9]. 
The similar yet more mechanistic ''disposable soma'' theory argues that, since resources are often limited and the force of natural selection generally declines with age, an organism that allocates more energy into early reproduction versus longterm somatic maintenance will be more successful. Thus, according to this theory, aging results from a lack of investment in anti-aging mechanisms [10,11]. Furthermore, according to the ''antagonistic pleiotropy'' and ''disposable soma'' theories, evolutionary trade-offs should exist between lifespan and reproduction [1][2][3]. These three classical theories all predict that extrinsic mortality levels should be inversely correlated with evolved lifespan. Nonclassical theories, in contrast, predict that increased extrinsic mortality could select for the evolution of longer lifespans. Field observations have shown that opossums from an insular, lowpredation population exhibit reduced litter sizes and delayed senescence compared to a mainland, high predation population [12]. Daphnia from temporary ponds were observed to experience higher rates of mortality than those from permanent lakes [13]. Concomitant with this, Daphnia from temporary ponds exhibited shorter lifespans and faster juvenile growth compared to those from permanent lakes [14]. In Drosophila melanogaster, a decrease in longevity and earlier peak fecundity can be directly selected for by increasing extrinsic mortality levels [15]. Garter snakes from low extrinsic mortality environments were found to have longer lifespans than those from high mortality environments [16] and annual fish of the genus Nothobranchius from a dry region exhibited shorter captive lifespans than those from a humid region [17]. Furthermore, numerous adaptations that result in decreased extrinsic mortality -such as arboreality, wings, shells, and larger brains -are associated with increased longevity [18,19]. Other evidence comes from a deterministic, computational model developed by Drenos and Kirkwood, which found that higher levels of extrinsic mortality results in the reallocation of energy from somatic maintenance to reproduction [20]. The reverse was also reported in a model published by Cichoń, which found that low rates of environmentally-induced mortality promoted allocation of energy to repair instead of reproduction [21]. It was also shown in a separate theoretical optimization model that the evolution of shorter lifespan in insect workers is favored by risky environments [22]. Although corroborative of a role for extrinsic mortality in driving the evolution of aging, other studies challenge the classical predictions. In guppies derived from natural populations, fish from high predation localities exhibited earlier maturation ages, enhanced swimming performance, and delayed senescence compared to fish from low predation sites [23]. In the nematode Caenorhabditis remanei, increasing random versus condition-dependent extrinsic mortality had differential effects on evolved lifespan: High random mortality selected for a shorter lifespan while high conditiondependent mortality (induced by heat stress) engendered nematodes that were longer lived. No obvious effects on reproduction were observed for nematodes subjected to either type of mortality [24], possibly due to a subpopulation of heat-stressed nematodes benefiting from the death of their neighbors. 
Auxiliary support comes from a deterministic, mathematical model developed by Abrams, which theorized that, depending on changes in density dependence and age-class specificity of externally introduced mortality, aging may evolve to increase, decrease, or remain unaffected [25]. While the modeling approach reveals that agespecific effects can produce a decreased rate of senescence in response to density-dependent mortality, underlying heterogeneity of populations, population instability, shared energy interdependence, starvation effects, and energy associated with mating were not considered [25]. Indeed, in a recent paper modeling the evolution of aging, population heterogeneity was shown to be an important feature of the population response [26]. These parameters were also not considered in a follow-up, deterministic model by Williams and Day, which showed that the internal condition of an individual can affect selection against physiological deterioration in response to interactive mortality sources [27]. Both modeling and empirical data show that extrinsic mortality can lead to the evolution of either increased or decreased intrinsic mortality. Outside of age-class specificity [25] and the internal condition of an individual [27], the factors that predispose a population to evolve one way or another remain largely unknown. In addition, typically, models of life histories represent homogenous populations under static conditions for which reproduction and survival are modeled by age-dependent differential equations that can be solved to find conditions of optimal net reproductive rates. However, natural populations exhibit features such as finiteness, stochasticity, interdependence, and heterogeneity, allowing for counterintuitive effects. It has been previously shown that guppies subjected to higher levels of predation exhibit delayed senescence [23] and have higher per-capita resource availability than guppies from low-predation localities [28]. We therefore hypothesized that non-classical increases in longevity can naturally arise when energy becomes plentiful due to an increase in extrinsic mortality (i.e., when energy that would otherwise be used by neighbors is freed up for consumption by the remaining population). Furthermore, we hypothesized that this non-classical effect would only arise under specific model conditions, since the majority of the published data suggest that the classical response is much more common. Therefore, we implemented an energy allocation model in the context of a non-stable, finite, stochastic, and interdependent population. To test this hypothesis, we constructed an agent-based computational model in which each individual is subject to shared resource availability, sexual reproduction, intrinsic and evolvable allocation of resources toward repair and maturation, density-dependent death from predation, intrinsic evolvable death times and extrinsic death from starvation. Our modeling approach is powerful because it stems from first-principles about energy allocations and because we are able to model each individual explicitly, allowing us to capture features of natural populations. The incorporation of finiteness, stochasticity, population heterogeneity, and interdependence uniquely distinguishes our model from previously published models investigating the evolution of aging [20][21][22]25,27]. 
To test which (if any) conditions can lead to the evolution of an increase or decrease in average longevity, we used simulated annealing optimization [29] to determine which global parameters (e.g., food generation rate, maturation efficiency, mating cost and threshold) bias populations toward the evolution of higher or lower intrinsic mortality as a function of extrinsic death. Using this unbiased parameterization approach, we found that longer lifespans should evolve naturally in the presence of increasing predation if the cost of mating is relatively high and if energy is generated in excess. Conversely, we found that the classical, decreased investment in aging repair was most dramatic when mating costs were relatively low and when food was relatively scarce. By employing simulated annealing to identify these conditions, our study largely avoided biases caused by manual model parameter selection. Methods We studied the evolution of aging in populations using a stochastic agent-based iterative computational model subject to randomness associated with foraging, mate selection, mating success, predation, inheritance of intrinsic longevity and maturation times, and starvation. In addition to inherent random fluctuations, populations were allowed to evolve over time as individuals with more favorable energy allocation mechanisms were more likely to pass on their intrinsic longevity and maturation times to their offspring (Fig. 1). We derive the computational model and describe the methods we used to solve and fit the model in the sections below. We have included a summary of all model parameters in Table 1. Modeling evolving populations In order to explore how predation-mediated energy availability may bring about the evolution of increased lifespans (non-classical) or decreased lifespans (classical), we derived a stochastic agent-based computational model starting from energy allocation assumptions. To start, we assumed that each individual, i, keeps track of its energy level (E(i,t)), age (a(i,t)), gender (g(i)), chosen mate (mate(i)), as well as an inherited age to maturation (T_mat(i)) and an inherited intrinsic lifespan (T_die(i)). We assumed that the available total forage energy (E_pool(t)) is kept in a common energy pool and is generated at a rate proportional to the population size modifier (N), the adult metabolic energy cost (m), and a starvation modifier (e). We also assumed that the shared energy pool has a maximum capacity of 100 · N · m, representing saturation of the food source in the environment. In order to calculate the flow of energy through the model, we assumed that each individual i calculates energy costs (C(i,t)) for repair, maturation, and metabolism at each model iteration t. We assumed that a creature i grows in size while maturing and that its size is proportional to its energy by a factor (m).
Newborn individuals start with energy E_0 (equivalent to S_0 · m) and grow by a fraction proportional to their current size until reaching the mature energy level, E_mat (equivalent to S_mat · m). We then solved for f(i) by substituting the known boundary conditions into (3). Using (3), we calculated the fractional size of an individual at any point in time, s(i,t). By assuming that the energy required to grow at each iteration is proportional to the current size, and allowing for an energy conversion modifier (k), we calculated the per-iteration maturation energy cost. Next, we assumed that aging is caused by non-ideal iterative repair, which incurs a per-iteration cost proportional to the relative size (s(i,t)) of each individual and a function of the inherited time to death (T_die(i)) of each individual, where the time-independent cost function (C(i)_repair) can have one of six forms. While it is possible that the per-iteration cost of aging will change with age, energy availability, gender, and other characteristics, we were unaware of a biologically tested general model for the per-iteration cost of repair. Instead, we modeled the per-iteration cost as one of six functions of the intrinsic death time, as shown in eq. (8). However, we allowed the repair cost function to vary in order to avoid biasing the behavior of the model toward the evolution of higher or lower intrinsic death times, instead treating the shape of the repair function as an independent variable that may take on linear, sigmoidal, or asymptotic shapes (Fig. S1). Furthermore, we allowed for both a relatively "cheap" and a relatively "expensive" version of each shape to improve the flexibility of model fitting. Finally, we calculate the energy required for per-iteration metabolic expenditures, which is assumed to be constant and proportional to the current relative size of each individual. [Table 1 (excerpt): used to scale energy costs for non-mature individuals; T(i)_mat^0, initial maturation age, 45 to 55 iteration rounds, set to be initially high to avoid population collapse due to prohibitive costs; initial intrinsic aging death age, 65 to 75 iteration rounds, set to be initially low to avoid population collapse due to prohibitive costs; stochastic variables are used to randomly adjust the inherited mating time, death time, and foraging rate; P_pregnancy, intrinsic probability of pregnancy; parameters optimized by simulated annealing are allowed to vary within a range. Caption: Populations were modeled explicitly using stochastic agents to represent individuals subject to shared resources, mating, and both extrinsic and intrinsic death. doi:10.1371/journal.pone.0086602.t001] After calculating the immediate energy expenditures, each individual determines the energy foraged (F(i,t)) from available common food in a "greedy" fashion (in random order each iteration round), subject to an intrinsic stochastic foraging capacity (F(i,t)_cap) and an energy storage maximum set to approximately five iterations of maximum foraging (i.e., an individual requires approximately five iterations to fully replenish its energy supply in the absence of energy expenditure), where a is a uniformly distributed random number from 0 to 1 added to decrease the chance of extinction by starvation, which can arise due to the discrete simulation methodology. Constants in eq. (10) were manually selected to ensure that foraging would result in a mean increase in stored energy under typical simulation conditions.
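The per-iteration energy bookkeeping described above can be summarised in a simplified sketch. This is not the authors' Java implementation: the growth, repair-cost, and foraging expressions below are stand-ins chosen only to mirror the described flow of energy (size-scaled costs, a shared pool, a storage cap of roughly five foraging events, and starvation when the balance turns negative), and all constants, the data layout, and the linear repair shape are assumptions.

```python
import random

def repair_cost_linear(t_die, scale=0.02):
    """One of the six shapes mentioned in the text (here: a linear one);
    a larger inherited T_die implies a larger per-iteration repair cost."""
    return scale * t_die

def iterate_energy(ind, pool, m=1.0, k=1.0):
    """One simplified energy step for a single agent `ind` sharing `pool`.
    Returns False if the individual starves (energy <= 0)."""
    s = min(ind["energy"] / ind["E_mat"], 1.0)            # stand-in for the relative size s(i,t)
    maturation = k * s if ind["age"] < ind["T_mat"] else 0.0
    repair = s * repair_cost_linear(ind["T_die"])
    metabolism = m * s
    cost = maturation + repair + metabolism

    forage_cap = random.uniform(0.5, 1.5) * m             # stochastic foraging capacity
    foraged = min(forage_cap, pool["energy"])
    pool["energy"] -= foraged                             # foraged energy leaves the shared pool

    ind["energy"] = min(ind["energy"] + foraged - cost,
                        5.0 * forage_cap)                 # storage cap of ~5 foraging events
    ind["age"] += 1
    return ind["energy"] > 0.0
```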
We assumed density-independent foraging for several reasons. First, we assumed that energy availability decreases with trophic level (i.e., vegetation is ubiquitous, while predators must search for prey). Second, we wanted to keep the model as simple as possible in order to simplify analyses. Finally, the foraged energy is removed from the common pool of energy. At this point, if E(i,t) ≤ 0, the individual was marked as "starved" and removed from further simulation (died). In addition to death from starvation, individuals are subjected to density-dependent predation. The probability that each individual dies from predation during each simulation round, t, was assumed to be density dependent, where x is the predation modifier parameter used to linearly scale the magnitude of predation-dependent death. If an individual was randomly selected to die from predation, it was marked as "eaten" and removed from subsequent simulation iterations. Finally, if an individual reached or surpassed its old-age limit (T_die(i) ≤ a(i)), it was removed from the population and marked as dead from intrinsic "aging" death. After accounting for intrinsic energy costs and death effects, surviving mature females were mated with surviving mature males. Since we assumed that the total lifespan of individuals in the model is relatively short (on the order of 100 iteration rounds), we assumed for simplicity that all eligible individuals have sufficient time to find suitable mates during each iteration round. We imposed an energy requirement for individuals to engage in mating behavior: both females and males were required to be non-starving (E(i,t) > mateThreshold), and females were required to be non-gestating (not currently mated). Mating resulted in the expenditure of energy, E_mate, for both sexes regardless of outcome, and represented the energy required to find, attract, and/or copulate with a mate. The probability that female i becomes pregnant was defined to decrease linearly as a function of age after maturation, where a(i) is the current age, T_mat(i) is the maturation age, and T_die(i) is the age at which the female will die from old age (intrinsic death time). We assumed that male fertility does not depend on age for simplicity. Each eligible female was allowed to mate with each eligible male in a randomized fashion until she became ineligible (e.g., successful mating or starving), or until no eligible bachelors existed (e.g., all had insufficient energy). Finally, all females that finished gestating (survived since the last iteration) gave birth to between one and two offspring (to avoid gender bias, we assumed no cost associated with giving birth). When testing the effect of increasing fertility as a function of age, the probability of pregnancy was set to 0.5 instead of eq. (13), but the number of offspring increased as a function of age. For each offspring z, the maturation age T_mat(z) and the intrinsic death age T_die(z) were assumed to be inherited from the mother j and the father k according to linear inheritance rules, where b_m and b_d are uniformly distributed random variables used to mimic mutation, and V_mat and V_die are "evolvability" constants that determine the degree of variability of each parameter. Therefore, the intrinsic maturation and death times (and thus the daily repair and maturation costs) are inherited from both parents and further modified by a random multiplier representing mutation.
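The mortality, fertility, and inheritance rules described above can likewise be sketched as follows. The exact functional forms of the predation, pregnancy, and inheritance equations are not reproduced in the text, so the density dependence, the end points of the linear fertility decline, and the mid-parent inheritance with a uniform mutation multiplier are all assumptions consistent with, but not identical to, the original model.

```python
import random

def predation_probability(pop_size, x, n_modifier=1000.0):
    """Density-dependent predation per iteration, scaled linearly by the predation
    modifier x; proportionality to population density is assumed for illustration."""
    return min(1.0, x * pop_size / n_modifier)

def pregnancy_probability(age, t_mat, t_die, p_max=1.0):
    """Linear decline after maturation: highest at T_mat, lowest at T_die.
    The end-point values (p_max at T_mat, zero at T_die) are assumptions."""
    if age < t_mat:
        return 0.0
    return p_max * max(0.0, 1.0 - (age - t_mat) / float(t_die - t_mat))

def inherit(value_mother, value_father, evolvability):
    """One plausible reading of the linear inheritance rules: the mid-parent
    value multiplied by a uniform mutation factor of width `evolvability`."""
    b = random.uniform(1.0 - evolvability, 1.0 + evolvability)
    return 0.5 * (value_mother + value_father) * b
```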
All offspring start out with age a(i,t) = 0 and an equal probability of being female or male. Initially, each simulation is started with equal numbers of newborn males and females. Since simulations include stochastic fluctuations in the amount of energy foraged, predation, and the order of individual updates, we iteratively computed 300,000 simulation iterations following the logic described above and averaged across 400 independent simulations to ensure even sampling. We selected 300,000 iteration steps to ensure that the average inherited maturation and death times stabilized.

Optimizing global model parameters with simulated annealing

While individuals in each simulation keep track of evolving parameters for the amount of energy they devote to repair and maturation (used to calculate the intrinsic T_die and T_mat), several simulation-invariant parameters can bias the population toward the evolution of an increased or decreased average repair or maturation rate. For example, the amount of energy generated per simulation iteration should affect the average foraging amounts of individuals at a given density. Furthermore, since conclusions drawn from mathematical models may be non-general or subject to parameter bias, we decided to define a range of values for several simulation-invariant parameters and used a simulated annealing approach [29] to identify sets of parameter values that favor the classical (i.e., decrease in repair with increased predation) or non-classical aging strategies (i.e., increase in repair with increased predation). Simulated annealing consisted of simulations using a specific set of simulation-invariant parameters, which were evaluated for cases of increasing extrinsic mortality from predation (x = 0.0025, 0.005, 0.01, 0.02, and 0.04). At the end of each simulation, average T_die times were calculated for eight independent simulations, averaging the last 1,000 recorded population-average T_die times to improve the signal-to-noise ratio. Next, a score was assigned to the current set of simulation-invariant parameters based on whether we were biasing toward a non-classical or a classical response; the score is defined in terms of the indicator function I(·) and is improved if death times are observed to change in the desired fashion, and especially if the change is monotonic. After scores are calculated, a set of candidate parameters is kept with a probability Pr(keep | candidateScore, currentScore, T) that depends on the current annealing temperature T (an optimization parameter) and the difference in scores. At the end of each annealing optimization round, the temperature is multiplied by 0.95, resulting in increasingly unlikely probabilities of keeping sets of poor parameters at later iterations. The initial and ending temperatures were empirically chosen to be 100 and 0.5, respectively, resulting in 104 annealing rounds. Candidate parameter sets leading to unstable simulations (all eight went extinct) were discarded without temperature changes. At the end of the simulated annealing procedure, the set of parameters that resulted in the highest recorded score was selected for more detailed simulations and analysis. For a set of simulation-invariant parameters and their allowed ranges, see Table 1. All Java code is available free of charge for download at https://github.com/Shokhirev/EvolutionOfAging.
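The annealing loop can be sketched as follows. The text specifies only that the acceptance probability depends on the score difference and the temperature, so the sketch assumes a standard Metropolis-style criterion and a toy objective in place of the eight replicate simulations.

```java
import java.util.Random;

// Sketch of the simulated-annealing loop over simulation-invariant parameters.
// The acceptance rule is an assumed Metropolis criterion; the evaluate() function
// stands in for averaging eight replicate simulations.
public class AnnealingSketch {
    static final Random RNG = new Random();

    /** Placeholder objective; in the model this would be the averaged fit score. */
    static double evaluate(double[] params) {
        return -Math.abs(params[0] - 2.9) - Math.abs(params[1] - 3.4); // toy objective
    }

    public static void main(String[] args) {
        double temperature = 100.0;            // initial temperature
        final double finalTemperature = 0.5;   // stopping temperature (~104 rounds at x0.95)
        double[] current = {1.5, 1.5};         // e.g., {starvation modifier e, E_mate}
        double currentScore = evaluate(current);

        while (temperature > finalTemperature) {
            // Propose a candidate by perturbing one parameter of the current set.
            double[] candidate = current.clone();
            candidate[RNG.nextInt(candidate.length)] += RNG.nextGaussian() * 0.2;
            double candidateScore = evaluate(candidate);

            // Metropolis-style acceptance (assumed form): keep improvements always,
            // keep worse candidates with probability exp((candidate - current) / T).
            double accept = Math.exp((candidateScore - currentScore) / temperature);
            if (candidateScore >= currentScore || RNG.nextDouble() < accept) {
                current = candidate;
                currentScore = candidateScore;
            }
            temperature *= 0.95;               // geometric cooling schedule
        }
        System.out.printf("Best score %.3f at e=%.2f, E_mate=%.2f%n",
                currentScore, current[0], current[1]);
    }
}
```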
Results

To explore which conditions can lead to a non-classical or classical response to increased extrinsic mortality, we implemented a computational model of an evolving population of individuals subject to shared resources, sexual reproduction, extrinsic density-dependent death from predation, extrinsic death from starvation, and intrinsic inheritable somatic repair rates (Fig. 1). Adopting an energy investment model, individuals are subject to a constant, size-dependent metabolic cost, an age-independent inherited somatic repair cost, and an age-dependent inherited maturation cost, along with both an energy cost and a threshold required for mating. Furthermore, the probability of successful reproduction is assumed to be linearly dependent on the maternal age after maturation, with the highest and lowest probability of mating fixed at the inherent T_mat (maturation age) and T_die (longevity) values for each female, respectively. Fig. 1A shows a diagram summarizing our simulation procedure. Fig. 1B depicts typical low-predation population traces, which show characteristic density-dependent birth-death fluctuations. We next wanted to identify simulation parameters which bias populations to evolve higher or lower average intrinsic mortality times using a simulated annealing optimization approach.

Simulated annealing reveals conditions favoring classical vs. non-classical responses to predation

To identify conditions that result in the evolution of shorter or longer average population lifespans in response to increased predation, we ran simulations with five different values for the predation modifier (x = 0.0025, x = 0.005, x = 0.01, x = 0.02, and x = 0.04). The predation modifier, x, linearly scales the predation probability per individual per simulation iteration for any given population density. Thus, as the predation modifier is increased, the odds of any individual dying from predation increase for a specific population density. We then used a simulated annealing approach (Fig. 1C) to find values of six simulation-invariant parameters resulting in the evolution of lower average intrinsic death times as a function of increasing predation (Fig. 2A and 2B, summarized in Table 1). Six parameters were optimized by simulated annealing: the food turnover or starvation modifier (e) controls the number of deaths caused by starvation, the mating energy (E_mate) is the energy expended during mating, the mating energy threshold (mateThreshold) is the energy needed to be eligible for mating, the growth efficiency (k) is the fraction of foraged energy converted to mass, E_0 is the initial energy of newborn individuals, and D_type refers to the type of death cost function (sigmoidal, linear, or asymptotic; low or high). Specifically, lower average intrinsic death times evolved when energy (food) turnover was relatively low (e = 1.51), when the energy to mate was relatively cheap (E_mate = 1.65), and when the energy threshold required for mating was relatively moderate (mateThreshold = 3.58). Next, we repeated the simulated annealing procedure (Fig. 2C and 2D) to determine if combinations of the six simulation-invariant parameters can result in an opposite, non-classical effect (i.e., an increase in the average intrinsic time to die, T_die, as a function of increasing predation modifier, x). The results show that an increased lifespan can evolve and is favored by relatively abundant food conditions (e = 2.92) and by both a relatively costly mating energy threshold (mateThreshold = 4.70) and mating cost (E_mate = 3.36).
Interestingly, a low growth efficiency, a low energy of newborn individuals, and a cheap asymptotic repair cost function were important for the evolution of both decreased and increased average intrinsic death times (k = 0.052 and 0.022, E_0 = 0.02 and 0.007, and D_type = 2 for the two cases, respectively), suggesting that these parameters were primarily important for overall simulation stability and not necessary for producing differential responses to increased extrinsic mortality. Interestingly, the shape of the repair cost function (i.e., how the cost of repair changes with the expected longevity), which was allowed to vary freely between six different types (Fig. S1) during the optimization process, revealed that both the increase and the decrease response can evolve under various death cost assumptions (dark dots in Fig. 2B and 2D). However, while a decrease in intrinsic death times was seen for linear, asymptotic, or even sigmoidal cost functions, an increase was only seen with linear and asymptotic intrinsic death cost functions, suggesting that an increased lifespan is more likely to evolve if the increases in maintenance cost do not outpace the increase in lifespan.

Increased predation can select for the evolution of shorter lifespans coupled to faster maturation or longer lifespans coupled to longer maturation times

After identifying unique conditions impacting the evolution of aging, we analyzed how various statistics were affected by different levels of predation. We refer to classical conditions as conditions which favor a decrease in lifespan in response to higher predation. Conversely, we refer to non-classical conditions as conditions which result in the evolution of an increase in lifespan in response to higher predation. Under classical conditions, rising levels of predation selected for the evolution of shorter mean lifespans (E[T_die(i)] at x = 0.04 < E[T_die(i)] at x = 0.0025) and earlier maturation ages (E[T_mat(i)] at x = 0.04 < E[T_mat(i)] at x = 0.0025), as exemplified in Fig. 3A and 3C, respectively. Conversely, non-classical conditions led to an increase in mean lifespan (E[T_die(i)] at x = 0.04 > E[T_die(i)] at x = 0.0025) and mean maturation time (E[T_mat(i)] at x = 0.04 > E[T_mat(i)] at x = 0.0025), as shown in Fig. 3B and 3D, respectively, although the increase in average maturation time was less dramatic. The entire distribution of population death times, Pr(T_die(i) | a(i)), was shifted toward lower intrinsic death times for the classical condition (Fig. 3E), showing that under classical conditions the entire population reacted by deselecting investment into repair. On the other hand, under the non-classical conditions, heterogeneity in the population intrinsic death times, Pr(T_die(i) | a(i)), emerged with increasing extrinsic mortality (Fig. 3F), suggesting that population heterogeneity in intrinsic death times arises naturally in response to selective pressures from increased extrinsic mortality. However, high-predation conditions leading to the evolution of decreased aging (longer mean T_die) also resulted in population instability, with a large fraction of simulations becoming extinct under high-predation conditions (Fig. S2). Interestingly, while only the highest predation condition resulted in drastic extinction for the non-classical (increase) case (Fig. S2B), the highest extrinsic mortality condition resulted in relatively stable populations under the classical conditions (Fig. S2A).
In addition, while we assumed that fertility declined linearly with age (the probability of a successful mating decreases in the model), in some species, such as the Blanding's turtle, fertility can increase with age [30]. Therefore, we repeated the simulations but kept the probability of mating constant, while allowing the number of offspring to increase with age, thereby giving a reproductive advantage to older individuals. The observed trend persisted under the increased-fertility assumption, since increased extrinsic mortality led to the evolution of longer or shorter lifespans under the non-classical or classical conditions, respectively (Fig. S3).

Changes in evolved lifespan are matched by corresponding alterations in energy investment

To gain insight into the mechanisms guiding these evolutionary changes, we investigated how different levels of predation impacted statistics related to population size as well as energy relocation and availability. Under classical conditions and rising values of extrinsic death, juveniles, who are subject to maturation costs in addition to metabolic and maintenance costs, invested relatively less energy in somatic maintenance (Fig. 4A), relatively more energy in early maturation (Fig. 4B), and relatively less energy in metabolism (Fig. 4C). Higher predation also caused mature individuals to invest less energy in somatic maintenance (Fig. S4A) and instead increase energy investments in mating and reproduction (Fig. S4B). These data show that, under conditions which select for the evolution of shorter lifespans in response to increased extrinsic mortality, individuals evolve to siphon energy away from somatic maintenance in order to invest in earlier maturation and mating. Under non-classical conditions, however, higher levels of predation caused juveniles to invest relatively more energy into somatic maintenance (Fig. 4D) and relatively less energy into early maturation (Fig. 4E). Comparable fractional energetic investments in metabolism were observed regardless of the value of the predation modifier, x (Fig. 4F). Likewise, larger values of extrinsic mortality caused mature individuals to devote more energy to somatic maintenance (Fig. S4C). On the other hand, mature energy investments in mating and reproduction were reduced (Fig. S4D). Corroborating the data presented in Figure 3, non-classical conditions encouraged individuals to reallocate energy away from early peak fertility and mating, instead investing in greater longevity. We next analyzed how higher levels of predation affected parameters related to population density. In the model, density is defined as the current population size divided by the population size modifier, N, and affects food availability and predation probability. Under classical conditions, increased extrinsic mortality reduced population size (Fig. 5A), enlarged the total pool of energy that could be foraged (Fig. 5B), increased the birth rate (Fig. 5C), and decreased the average energy reserves of mature individuals (Fig. 5D). Furthermore, we show that the per-individual probability of being predated on average increased with an increasing predation modifier (Fig. S5A). Together these findings suggest that, although extra energy is made available to the survivors of higher predation, the average individual has less energy, and energy is predominantly used for reproduction, leading to faster population turnover.
Under non-classical conditions where individuals on average evolve longer lifespans in response to increased extrinsic mortality, population size also decreased dramatically in response to greater extrinsic mortality (Fig. 6A) and the shared energy pool was never depleted (Fig. 6B). The birth rate decreased (Fig. 6C), and interestingly, the average energy of mature individuals increased with higher predation (Fig. 6D). Thus, under non-classical conditions, survivors of higher predation populations devoted less energy into reproduction and enjoyed increased energy levels, despite the aforementioned extra investments in somatic maintenance. In addition, the evolution of energy investment away from maturation actually resulted in a decreased average probability of predation with increasing predation modifier (Fig. S5B). These are the key differences between the classical and non-classical outcomes, as survivors of conditions leading to a classical decrease in maintenance had less energy under higher levels of predation and were on average more likely to die from predation during each simulation iteration. Discussion It is becoming increasingly evident that classical predictions of aging cannot always explain trends observed in natural populations, with individuals from some natural populations exhibiting longer lifespans in response to increased extrinsic mortality [31][32][33][34]. Assuming equal susceptibility to extrinsic death factors, and infinite energy, individuals with the highest reproductive capacity (i.e., longest cumulative reproductive lifespan) will outcompete their neighbors and quickly become dominant. However, we know that aging evolved under stochastic energy-limiting conditions with underlying costs associated with faster maturation and somatic repair. Therefore, we decided to start from first principles, developing a relatively simple agent-based energy allocation computational model in which the maturation and intrinsic death times were inherited traits requiring devotion of limited energy resources foraged from a shared energy pool. Using an unbiased, simulated annealing approach and a stochastic, agent-based model (Fig. 1), we show that increased lifespan can evolve naturally in response to increased extrinsic predation pressure under specific energy-allocation conditions (Fig. 2). We demonstrate that increased extrinsic mortality can have differential effects on evolved lifespan and maturation age (Fig. 3). These changes are accompanied by corresponding reallocations of energy between somatic maintenance and early peak fertility (Fig. 4 and S4). Specifically, our optimization approach revealed that the evolution of decreased average longevity occurs when both food production and mating costs are low ( Fig. 2A and 2B). Dissecting the individual energy allocation under these so called ''classical'' conditions showed that juveniles evolved to invest relatively more energy into maturation compared with somatic maintenance and metabolism ( Fig. 4A-C) and mature individuals devoted themselves to reproduction, at the cost of somatic repair (Fig. S4A and S4B). Since reproductive success is tied to the number of energetically costly mating attempts, this would result in earlier and more frequent reproductive success and subsequently a faster population turnover. 
Although average population size shrinks and the total energy pool for foraging is increased with increasing predation modifier, the average individual has less energy due to the extra investments into reproduction (Fig. 5A-D), and is subject to a higher per-iteration probability of death from predation (Fig. S5A). On the other hand, when optimizing for longevity increases, our simulated annealing approach revealed that longer lifespans naturally evolve when food production is relatively high and when mating incurs a significant energetic investment (Fig. 2C and 2D). Under these so called ''non-classical'' conditions, energy devoted to longevity is increased solely at the cost of significantly longer maturation times (Fig. 4D-F). Higher levels of predation again decrease population size and increase the total shared amount of resources, but total investments in reproduction evolve to be lower, the average individual has more energy (Fig. 6A-D and Fig. S4C and S4D), and the per-iteration predation probability decreases slightly (Fig. S5B). In other words, since reproduction is prohibitive, individuals require time to amass significant energy reserves before reproducing. To ensure that they do not die from intrinsic causes while they accrue the necessary energy, investment in repair is significantly increased, and investment in fast maturation becomes unnecessary. At the same time, since reproduction becomes less frequent, the overall population remains relatively sparse. This low density actually leads to an overall decrease in per-iteration predation probability, further increasing the benefit of the evolved longer lifespans. Finally, we briefly explore the effect of an increasing fertility pattern by showing that the observed pattern of the evolution of increasing or decreasing longevity holds without the need for reoptimization, when the number of offspring increases with age (Fig. S3). It was previously shown by Reznick et al. that guppies from high-predation localities exhibit delayed senescence as well as enhanced fecundity and swimming performance [23]. Relevant to our model, the authors discuss two theories for how rising predation could select for the evolution of belated senescence. The first is that changes in density regulation may indirectly benefit individuals by creating a surfeit of resources for the survivors. Experimental work has lent credence to this theory, finding that guppies subjected to elevated levels of predation do indeed have, on average, higher per-capita resource availability [28]. This is consistent with our findings, which demonstrate that rising levels of predation decrease population density and increase the total energy pool that can be foraged (Fig. 5 and 6). Moreover, our simulated annealing optimization identified food availability as a critical factor underlying the disparate effects on evolved lifespan, with low and high food turnover being linked to decreased and increased lifespan, respectively (Fig. 2). Survivors from predation enjoyed access to an enlarged energy pool under both increase and classical conditions, however, indicating that resource availability alone is likely not sufficient to drive the evolution of shorter or longer lifespans. Furthermore, a critical difference between classical and non-classical conditions was that the average individual had less energy under classical conditions (Fig. 5D) while the average individual had more energy under non-classical conditions (Fig. 6D). 
While increased extrinsic mortality ensured that all surviving individuals were always satiated under both types of conditions, a relatively cheap mating cost coupled with the increased investment in earlier maturation actually resulted in lower average energy levels of mature individuals under the classical conditions. On the other hand, non-classical conditions prompted mature individuals to stockpile their energy reserves in ''anticipation'' of mating. The second theory discussed is that differences in reproduction may account for how an individual chooses to allocate its energy in response to an extrinsic source of mortality. Most vertebrates, such as mice and humans, exhibit a hump-shaped fecundity curve, peaking sexually shortly after maturation and then suffering a slow, gradual loss in reproductive efficiency [31]. This is not ubiquitous, however, as numerous species have been identified that have anomalous sexual trajectories. The Blanding's turtle, for example, exhibits a low reproductive potential and a reproductive output that increases gradually with age [35]. Naked mole rats suffer no obvious decline in fecundity over time and have atypical, eusocial mating habits centered on a single queen. Violent squabbles between females vying for queen status are not uncommon and can last for weeks. When a new queen takes over, she cannot reproduce until her body undergoes significant changes and, when she does, she will exhibit an exceptionally long gestation period of approximately 70 days [36][37][38]. As such, we would tentatively suggest that ''mating costs'' or the effort required to mate may be abnormally high in naked mole rats. Interestingly, both the Blanding's turtle and the naked mole-rat are remarkably long-lived for their body sizes and display signs of negligible senescence [30]. Of the three parameters which differed significantly under classical and non-classical conditions, two dealt directly with reproduction -mating energy and energy mating threshold. Mating energy is the energy expended during mating and energy mating threshold is the energy needed to be eligible for mating. Conditions which favored the evolution of shorter lifespans were characterized by cheap mating costs and moderate energy mating thresholds. Comparatively, conditions which favored the evolution of longer lifespans were marked by expensive mating costs and high energy mating thresholds (Fig. 2). Of note, while our model assumes a more typical decline in fecundity with age, the pattern of increased lifespan in response to increased extrinsic mortality under the non-classical conditions also holds if fertility increases with age (Fig. S3). These data demonstrate that, in response to increased predation, the energetic requirements associated with reproduction are a paramount determinant of whether a population will more likely follow the classical or nonclassical evolutionary path. In summary, our agent-based modeling results suggest two evolutionary strategies for dealing with increased age-independent extrinsic mortality: the classical response involving faster maturation and faster aging, and the non-classical response involving delayed maturation and delayed aging. These findings are congruent with and build upon the disposable soma theory, which posits that trade-offs between reproduction and somatic maintenance underlie evolved changes in aging. 
Furthermore, our results corroborate the wealth of data in opossums [12], Daphnia [13,14], guppies [23,28], nematodes [24], flies [15], and other organisms which suggest that extrinsic mortality can drive evolutionary changes in lifespan. The predictions made by our model also have important implications for life-history theory and aging. Increased investment in somatic maintenance is predicted to evolve under increasing extrinsic mortality conditions if mating is considered to be energetically demanding (e.g., long mate search times, maintenance of large territories, or courtship behaviors). Furthermore, the evolved intrinsic death times are expected to be highly variable (Fig. 3F), with significantly longer lifespans expected of a sub-population of individuals (although the shape of the repair cost function may affect the variability). This is likely the case because, as reproduction is relatively costly, significant time would be required to obtain the energy reserves required for mating. Large amounts of resources would help ensure that energy is always available and, despite the high levels of predation, further investments in somatic maintenance would allow these individuals to accumulate the necessary energy before dying of intrinsic causes. At the same time, the reduced population turnover leads to decreased population density and lower predation probabilities (Fig. S5). In other words, animals that live longer can accumulate additional energy reserves and reproduce more than those that do not. Investment in fast maturation therefore becomes expendable and investments in delayed aging take precedence. A similar concept was presented by Shanley and Kirkwood [39]. By using a mathematical life-history model, the authors showed that periods of famine can cause individuals to shift resources away from reproduction and toward somatic maintenance, since juvenile survival was lower when resources were scarce. This allowed individuals to survive famines and, when resources were abundant, avail themselves of the newfound energy and engage in reproduction. The authors theorized that this may be the evolutionary basis of caloric restriction and its lifespan-extending effects [39]. Alternatively, if mating is relatively cheap and if sources of energy are scarce, a better evolutionary strategy is to forgo somatic maintenance in lieu of earlier peak fertility, as suggested by classical theories of aging.

[Figure 2. Classical and non-classical conditions identified by simulated annealing optimization. A simulated annealing optimization scheme was used to find values for six simulation-invariant parameters that would predispose populations toward either increased or decreased maintenance in response to increased extrinsic mortality. The fit score stochastically improved over the course of the optimization (A and C). The optimal starvation modifier (e), growth efficiency (k), initial energy of individuals (E_0), mating energy (E_mate), mating energy threshold (mateThreshold), and death cost function type (D_type) for the classical (B) and non-classical (D) effect are shown as a function of optimization duration. D_type: 0 = Sigmoidal Low, 1 = Linear Low, 2 = Asymptotic Low, 3 = Sigmoidal High, 4 = Linear High, 5 = Asymptotic High. Colored dots indicate that the intrinsic death effect was monotonic. doi:10.1371/journal.pone.0086602.g002]
Since the odds of surviving long enough to exhibit signs of aging are so low, individuals that mature earlier under these classical conditions of high predation will be able to produce more offspring than those that mature at later dates. Since resources are limited and mating is fairly affordable, investments in early fertility, as the mutation accumulation, antagonistic pleiotropy, and disposable soma theories would suggest, take precedence over investments in somatic maintenance. Interestingly, the distribution of intrinsic death times under classical conditions is still expected to be approximately normal or log-normal, with a significant coefficient of variation (Fig. 3E), suggesting that variability in intrinsic death times naturally arises and stabilizes in populations. However, this variability is still relatively low compared to the variability in death ages that evolved under non-classical conditions (Fig. 3F). Particularly supportive of the disposable soma theory, both our classical and non-classical conditions reveal a trade-off between somatic maintenance and reproduction. Our model also shows, however, that the paradigmatic prediction regarding increased extrinsic mortality is incomplete, as additional factors can dictate the evolutionary response to higher rates of externally-induced death. In addition, our results suggest that increased extrinsic mortality leads to a higher probability of population extinction in a non-linear fashion under non-classical conditions (Fig. S2B). However, under classical conditions, high predation (x = 0.04) led to stable simulations in over 99% of the 400 independent simulations. While an in-depth analysis of population stability in the context of the evolution of aging is outside the scope of this work, it is possible that the classical response to extrinsic mortality pressures results in a more stable long-term solution, compared to the "slow-and-steady" non-classical response, which works by investing in the individual. This may be especially true during periods of highly fluctuating environmental conditions (as reviewed in [40]). Similarly, we notice that the force of selection declines as the population intrinsic death and maturation times approach their respective equilibrium population distributions (Fig. 3E,F). This can be readily seen by the rapid decrease in the rate of change of population-average statistics with simulation iteration (i.e., mean T_die and T_mat in Fig. 3A-D). This rate differs between the classical and non-classical conditions (equilibration by ~50,000 iterations in the classical cases, as seen in Fig. 3A,C, and longer for the non-classical case, as seen in Fig. 3B,D), highlighting the increased "evolvability" of populations under classical conditions [41].

[Figure 4. Under classical (A-C) and non-classical conditions (D-F), the percentage of per-iteration energy devoted to somatic maintenance, reproduction, and metabolism by juveniles is shown. In both cases, the majority of energy was devoted to reproduction, followed by metabolism, followed by somatic maintenance (A-F). Under classical conditions, rising levels of predation, x, caused juveniles to invest less in somatic maintenance (A), more into early peak fertility (B), and less into metabolism (C). Under non-classical conditions, larger values of x caused juveniles to devote less energy to early peak fertility (E) and more towards somatic maintenance (D). Investments in metabolism were comparable for various values of the predation modifier, x (F). doi:10.1371/journal.pone.0086602.g004]
Fluctuations about the equilibrium point are also noticeably higher in the non-classical case. This can be explained in part by the shape of the T_die population distributions, which are narrowly distributed about a mean value for the classical case (Fig. 3E), but highly skewed in the non-classical case at higher values of the predation modifier (Fig. 3F). This implies that, in the non-classical case, individuals in the model are subject to varying selective forces, which may allow for higher population robustness in the face of very rapid changes, and it further underscores the value of heterogeneous modeling approaches. While our modeling approach includes parameters for maturation, mating, metabolism, and maintenance and is able to capture population heterogeneity and stochastic effects, many parameters still remain to be explored in future studies. For example, seasonal fluctuations in shared energy generation may further affect the evolution of non-classical responses to increased extrinsic mortality. Also, for simplicity we assumed that extrinsic mortality is not age-dependent, although certain age groups may be more susceptible to predation than others [25,27,42]. Likewise, individuals in a population may not be equally susceptible to an extrinsic mortality source. Pathogens, for example, are less of a mortality hazard for individuals with strong immune systems than for those with weaker immune systems. Although some individuals may be more capable of dodging predators than others, predation is likely to affect individuals more haphazardly than pathogens [43]. Moreover, in an antagonistic pleiotropy model developed by Williams and Day, the authors demonstrate that selection against physiological deterioration in response to interactive mortality sources can vary with age as well as with an individual's internal condition [27]. Relevant to this, heat stress in nematodes engenders the evolution of longer lifespans [24]. Heat stress likely selects for individuals that have abnormally high thermotolerance, and this stress resistance may be coupled to other factors that promote longevity. These factors, in addition to the factors we incorporate in our model, could introduce another level of complexity and detail to future analyses. Ideally, our theoretical findings would be validated experimentally by empiricists. For example, key predictions of our model are that resource availability and reproductive costs largely determine how aging evolves in response to increased predation. Our model optimization predicts that longer lifespans will evolve in the presence of relatively high resource availability and when mating is relatively costly. In contrast, we find that shorter lifespans evolve when food is relatively scarce and reproduction is relatively cheap. Longer lifespans are associated with later maturation ages and, conversely, shorter lifespans are associated with earlier maturation ages. Numerous studies could be designed to test these predictions. For example, we predict that populations growing under food-limiting conditions are more likely to evolve shorter intrinsic lifespans in response to increased random mortality if the cost of reproduction is relatively cheap.
On the other hand, we would predict that a population of individuals that must invest a significant proportion of their energy into acquiring mates (e.g., development of sexually attractive characteristics, courtship, or mate search) would evolve longer lifespans if food is plentiful and the environment is relatively stable. While controlling food availability is relatively straightforward, various strategies could be employed to adjust the costs associated with mating. Mutations known to enhance or impede reproduction could be introduced or, alternatively, populations could be grown in such a way that the probability of male and female interaction is greatly reduced and/or requires higher energetic costs (e.g., by increasing the mate search time). Organisms with different mating efficiencies could also be compared in their response to increased extrinsic mortality. Aging is a multifarious process marked by epigenetic alterations, telomere shortening, genomic instability, exhaustion of stem cells, defective proteostasis, and other complex and interacting factors. It is also believed to underlie a plethora of age-related ailments such as cancer, Alzheimer's disease, and diabetes [44][45][46][47]. The evolutionary theories of aging have guided aging research for decades and shape how we view senescence as well as the feasibility of therapeutic interventions for age-related damage [48]. Further efforts at demystifying the evolutionary basis of this phenomenon are therefore critical to truly understanding its underlying mechanisms as well as for developing preventative and rejuvenative treatments for its associated ailments.

Figure S1 Graphical depictions of various repair cost functions over time. The shape of the repair function was allowed to change during the simulated annealing optimization. The repair function determines the per-iteration energy cost for maintaining a specific T_die(i). Per-iteration repair energy costs are shown for "low" and "high" sigmoidal, asymptotic, and linear repair cost functions, as detailed in eq. (8). (TIF)

Figure S2 Different values of the predation modifier more commonly caused population extinction than others. The probability of survival, as a function of time, is shown for differently predated populations under classical (A) and non-classical (B) conditions (n = 400). Under classical conditions, a value of 0.02 for the predation modifier, x, was most correlated with extinction events (A). Under non-classical conditions, a predation modifier of 0.04, followed by 0.02, was most commonly associated with population extinction. (TIF)

Figure S3 Classical and non-classical effects were qualitatively unchanged when fertility increased with age. To test if an increasing fertility with age could lead to the evolution of increased lifespans, simulations were carried out where the individual probability of successful mating was age-independent, but the number of offspring increased linearly with age.

Figure S5 Per-iteration predation rates evolved under classical and non-classical conditions. Populations subjected to increased extrinsic mortality evolved an increasing (A) or decreasing (B) per-iteration probability of death by predation under classical or non-classical conditions, respectively. Graphs show the per-iteration probability of predation as a function of the predation modifier, x, normalized for population density. (TIF)
Retinal Vessel Density in Age-Related Macular Degeneration Patients with Geographic Atrophy

We compared the retinal vessel density and inner retinal thickness in patients who had one eye with geographic atrophy (GA) and a fellow eye with intermediate age-related macular degeneration (iAMD). The vessel density of the superficial vascular complex (SVC) and deep vascular complex (DVC) on optical coherence tomography angiography, and the thickness of the nerve fiber layer, ganglion cell-inner plexiform layer (GCIPL), inner nuclear layer (INL), and outer nuclear layer (ONL) on a structural optical coherence tomography thickness map, were measured in 28 eyes of 14 GA patients with iAMD in the fellow eye. GA eyes had significantly lower vessel density in the SVC (26.2 ± 3.9% vs. 28.3 ± 4.4%; p = 0.015) and DVC (24.2 ± 2.6% vs. 26.8 ± 1.9%; p = 0.003) than fellow eyes (iAMD). The GCIPL and ONL were significantly thinner in GA eyes than in the fellow eyes (p = 0.032 and 0.024 in the foveal areas, p = 0.029 and 0.065 in the parafoveal areas, respectively). Twenty-four eyes of 12 patients were followed up for 2 years; seven of the fellow eyes (58.3%) developed GA during the follow-up period and showed reduced vessel density in the SVC (26.4 ± 3.0% vs. 23.8 ± 2.9%; p = 0.087) and DVC (25.8 ± 2.2% vs. 22.4 ± 4.4%; p = 0.047) compared to baseline. Vessel density and GCIPL thickness map measurements are potential GA markers in non-neovascular AMD.

Introduction

Advanced age-related macular degeneration (AMD) is the leading cause of irreversible vision loss in developed countries [1]. While early stages of AMD present with drusen and pigmentary changes, advanced AMD includes geographic atrophy (GA), characterized by retinal pigment epithelium (RPE) and photoreceptor loss, or the development of choroidal neovascularization (CNV) in neovascular (exudative) AMD [2]. RPE cell degeneration accompanied by the accumulation of drusen (lipo-glyco-proteinaceous deposits) is classified as early or intermediate AMD (iAMD) based on drusen size (i.e., small drusen <63 µm or intermediate drusen = 63-125 µm vs. large drusen ≥125 µm) [3]. GA has traditionally been defined on color fundus photographs as a discrete area of RPE atrophy measuring at least 175 µm in diameter observed together with photoreceptor loss [4]. Recently, the Classification of Atrophy Meetings (CAM) group termed GA as a subset of complete RPE and outer retinal atrophy without associated CNV: (1) a region of hyper-transmission of at least 250 µm in diameter, (2) a zone of attenuation or disruption of the RPE of at least 250 µm in diameter, and (3) evidence of overlying photoreceptor degeneration, in the absence of signs of an RPE tear [5,6]. Although GA primarily affects the outer retina, including photoreceptors and the RPE, studies have previously demonstrated secondary inner nuclear layer and ganglion cell loss in addition to outer retinal atrophy [7][8][9]. Recently, reduced retinal vessel density in both the superficial and deep layer capillaries was reported in eyes with GA, which is thought to reflect reduced inner retinal metabolic demand in the GA area [10]. However, reduced flow in the retinal capillaries was also often present in eyes with iAMD relative to age-matched controls [11,12]. To the best of our knowledge, there has been no study comparing the retinal blood flow in eyes with GA and iAMD.
Considering that AMD is a progressive retinal degenerative disease in which patients advance from the intermediate stage to GA, there may hypothetically be a difference in retinal blood flow between eyes with GA and eyes with iAMD. Optical coherence tomography (OCT) angiography (OCTA), which has recently come into clinical use, has the advantages of being noninvasive, depth-resolved, and able to repeatedly measure the retinal capillary vessels [13]. OCTA scans can be obtained at the same location at different time points and can provide quantitative metrics of the retinal microvasculature of patients across consecutive visits [14]. However, quantitative metrics from OCTA might be affected by systemic conditions (including age, sex, hypertension, and diabetes) and ocular biometry [13,15]. In this study, to minimize the confounding factors in the analysis of retinal vessel density using OCTA, we compared the retinal vessel density and inner retinal thickness of both eyes in patients with GA in one eye and iAMD in the fellow eye. In addition, we sought correlations between retinal vessel density and the size of the GA.

Patient Selection

This study was conducted after obtaining approval from the Institutional Review Board of Guro Hospital, Korea University, and adhered to the tenets of the Declaration of Helsinki. The medical records of patients who were diagnosed with GA in one eye and iAMD in the fellow eye at Guro Hospital of Korea University between 1 March 2015 and 31 August 2021 were retrospectively analyzed. The inclusion criteria were age >70 years, GA in one eye, and iAMD in the fellow eye, characterized by soft drusen larger than 125 µm within two disc diameters of the fovea and pigmentary changes. GA was defined following CAM report 3, as lesions of complete RPE and outer retinal atrophy that showed a region of hyper-transmission of at least 250 µm in length, a zone of attenuation or disruption of the RPE of at least 250 µm in length, and overlying ellipsoid zone degeneration on spectral-domain OCT imaging (Heidelberg Engineering, Heidelberg, Germany), without signs of RPE tear and CNV [5]. We included GA within the Early Treatment Diabetic Retinopathy Study (ETDRS) circle (6 mm) centered on the foveola, so that the location of the GA was included in the OCTA scan. The iAMD classification was adopted based on the proposal by Ferris et al. in 2013, as follows [3]: patients with large drusen (>125 µm) or with pigmentary abnormalities associated with at least medium drusen (63-125 µm) should be considered to have iAMD. Patients with neovascular AMD, small drusen measuring less than 63 µm, any history of anti-vascular endothelial growth factor (VEGF) injection, any past vitreoretinal surgery, or cataract surgery within six months of analysis; any maculopathy secondary to causes other than AMD (e.g., presence of diabetic or cystoid macular edema, epiretinal membrane, macular hole, or vitreomacular traction syndrome); and eyes with myopia greater than −3.0 D were excluded. Patients with ocular hypertension (intraocular pressure >21 mmHg), glaucoma, retinal degenerative disease (e.g., retinitis pigmentosa, cone-rod dystrophy, Stargardt disease), or diabetic retinopathy were also excluded. All patients underwent a complete examination, which included best-corrected visual acuity (BCVA), slit-lamp examination, intraocular pressure, and fundus examination.
Image Acquisition and Analysis

Blue-light FAF imaging was performed using a confocal scanning laser ophthalmoscope (Spectralis HRA+OCT; Heidelberg Engineering, Heidelberg, Germany), with a scan angle of 30° × 30° (8.7 × 8.7 mm). Structural OCT and OCTA images were obtained using a spectral-domain OCT device (Spectralis OCT2; Heidelberg Engineering, Heidelberg, Germany). OCTA images were obtained as 4.3 × 4.3 mm (20° × 20°) angiography scans, using 384 B-scan images (high-speed mode), with a wavelength of 870 nm, lateral resolution of 11.64 µm/pixel, and axial resolution of 3.87 µm/pixel. OCTA images with artifacts, such as shadows, double-vessel patterns, or horizontal movement lines, were excluded from the analysis [16]. Low-quality OCTA scans that were out of focus or affected by media opacity, tilted scans with uneven OCTA signal, and scans with abnormal fixation showing an eccentric foveola were also excluded. The retinal vascular density was determined based on the projection-artifact-removed OCTA images of the superficial vascular complex (SVC), from the internal limiting membrane (ILM) to the inner surface of the inner plexiform layer (IPL), and the deep vascular complex (DVC), from the outer surface of the IPL to the outer surface of the outer plexiform layer (OPL) [17]. In the case of segmentation errors, the boundary line of each B-scan image was manually adjusted by checking and refining the segmentation lines (ILM, IPL, and OPL) in each structural OCT (Figure 1a). To avoid bias in manual segmentation, the OCTA flow signal was minimized to 0% and the readers adjusted the lines by checking the structural OCT (Figure 1a). Vessel density was calculated as the ratio of pixels occupied by vessels to all pixels in the OCTA image [18] using ImageJ software (version 1.51; National Institutes of Health, Bethesda, MD, USA) after binarization with the Otsu auto-thresholding approach, which calculates the optimum threshold by minimizing intraclass variance and maximizing interclass variance (Figure 1b,c) [19,20]. Although a gold standard for binarization of retinal vessel density has not yet been proposed, Otsu's method has been suggested to have high repeatability in the binarization of retinal vessel density [21]. The central 0.6-mm diameter area centered on the fovea was excluded to mitigate the effect of the foveal avascular zone on vessel density measurements [22], as vessel density could otherwise be underestimated in eyes with geographic atrophy that did not involve the foveola. An automated algorithm based on a directional graph search was used to segment the volumes in the HEYEX software program (Thickness map, Heidelberg Engineering, Heidelberg, Germany) [23]. The thickness of the inner retina and outer nuclear layer (ONL) was measured using a thickness map divided into the nine areas of the ETDRS circle. In this study, the analysis included the central ring within 1 mm of the fovea, the parafoveal ring (1-3 mm), and the perifoveal ring (3-6 mm). Inner retinal layer thickness measurements were defined by the nerve fiber layer (NFL), from the ILM to the outer surface of the NFL, the ganglion cell-inner plexiform layer (GCIPL), from the inner surface of the ganglion cell layer (GCL) to the inner surface of the inner nuclear layer (INL), and by the INL. ONL thickness was measured from the outer surface of the OPL to the external limiting membrane (ELM).
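The vessel density computation described above can be sketched in plain Java (independent of the actual ImageJ pipeline used in the study): derive an Otsu threshold from an 8-bit en-face OCTA slab, binarize it, and report the fraction of suprathreshold pixels while excluding a central 0.6-mm-diameter disc. The image dimensions, pixel scale, and the assumption that the fovea lies at the image center are illustrative only.

```java
// Illustrative vessel-density computation: Otsu threshold + binarization + central
// exclusion. A plain-Java sketch, not the ImageJ workflow used in the study.
public class VesselDensitySketch {

    /** Otsu's method: pick the threshold maximizing between-class variance. */
    static int otsuThreshold(int[] histogram, long totalPixels) {
        long sumAll = 0;
        for (int t = 0; t < 256; t++) sumAll += (long) t * histogram[t];
        long sumBackground = 0, weightBackground = 0;
        double bestVariance = -1.0;
        int bestThreshold = 0;
        for (int t = 0; t < 256; t++) {
            weightBackground += histogram[t];
            if (weightBackground == 0) continue;
            long weightForeground = totalPixels - weightBackground;
            if (weightForeground == 0) break;
            sumBackground += (long) t * histogram[t];
            double meanBackground = sumBackground / (double) weightBackground;
            double meanForeground = (sumAll - sumBackground) / (double) weightForeground;
            double betweenClassVariance = (double) weightBackground * weightForeground
                    * (meanBackground - meanForeground) * (meanBackground - meanForeground);
            if (betweenClassVariance > bestVariance) {
                bestVariance = betweenClassVariance;
                bestThreshold = t;
            }
        }
        return bestThreshold;
    }

    /**
     * Vessel density (%) = vessel pixels / analyzed pixels, excluding a central disc
     * around the fovea. slab holds 8-bit values (0-255); pixelSizeMm is assumed,
     * e.g. 4.3 mm / 384 pixels; exclusionRadiusMm = 0.3 for a 0.6-mm-diameter area.
     */
    static double vesselDensity(int[][] slab, double pixelSizeMm, double exclusionRadiusMm) {
        int height = slab.length, width = slab[0].length;
        int[] histogram = new int[256];
        for (int[] row : slab) for (int v : row) histogram[v]++;
        int threshold = otsuThreshold(histogram, (long) width * height);

        double cx = width / 2.0, cy = height / 2.0;   // fovea assumed at image center
        double exclusionRadiusPx = exclusionRadiusMm / pixelSizeMm;
        long vessel = 0, analyzed = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                if (Math.hypot(x - cx, y - cy) < exclusionRadiusPx) continue; // skip central area
                analyzed++;
                if (slab[y][x] > threshold) vessel++;  // suprathreshold pixel counted as vessel
            }
        }
        return 100.0 * vessel / analyzed;
    }
}
```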
Segmentations were reviewed in each structural OCT comprising the thickness map and manually adjusted, by checking and refining the segmentation lines (ILM, NFL, GCL, INL, OPL, and ELM), to ensure their accuracy.

Measurement of GA size in the GA group

To assess the size of the GA on FAF images, the Regionfinder software (version 2.6.4.0; Heidelberg Engineering, Heidelberg, Germany), which automatically delineates atrophied lesions with hypo-autofluorescence (Figure 2), was used. Two ophthalmologists (S. H. and M. C.) measured the GA size, retinal vessel density, and retinal thickness independently of one another, and the averages of their reported values were used for analysis. To reduce the influence of the baseline GA area on the measured GA growth, a square root transformation of the GA area was implemented [24].

Follow-Up Analysis

Patients were followed up every three to six months. In patients with 24-month follow-up data (±3 months) from baseline, we compared the retinal vessel density and inner retinal thickness in GA eyes at baseline and at the last follow-up (24 months) to elucidate GA-related changes. Of the 14 patients, 12 were able to participate in the 24-month follow-up. During the follow-up period, the development of GA in the fellow eye was also recorded, and the retinal vessel density of the fellow eye with iAMD at the last follow-up was compared with that at baseline.

Statistical Analyses

Data are presented as means ± standard deviation (SD). Prism 7 (GraphPad Software Inc., San Diego, CA, USA) was used for the statistical and graphical analyses. The Shapiro-Wilk test was used to test the normality of the data distribution of visual acuity, retinal layer thickness, size of GA, and vessel density of both eyes. As these values passed the normality test, an independent t-test was used to compare visual acuity, retinal thickness, and vessel density between the two eyes of each individual. A Pearson correlation coefficient test was used to analyze the correlation between the size of the GA and retinal vessel density in the GA group. The paired t-test was used to analyze the progression of thickness and vessel density changes between baseline and follow-up in each eye.
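As an illustration of the square-root transformation and the baseline-versus-follow-up comparison, the short Java sketch below converts GA areas (mm²) to square-root-transformed values (mm) and computes the paired t statistic. The example numbers are invented, and the actual p-values in this study were obtained with Prism 7.

```java
// Illustration of square-root transformation of GA area and a paired t statistic.
// Example values are invented; the study's statistics were computed in Prism 7.
public class GaGrowthSketch {

    /** Convert GA areas in mm^2 to square-root-transformed values in mm. */
    static double[] sqrtTransform(double[] areasMm2) {
        double[] out = new double[areasMm2.length];
        for (int i = 0; i < areasMm2.length; i++) out[i] = Math.sqrt(areasMm2[i]);
        return out;
    }

    /** Paired t statistic for baseline vs. follow-up measurements (df = n - 1). */
    static double pairedT(double[] baseline, double[] followUp) {
        int n = baseline.length;
        double meanDiff = 0;
        for (int i = 0; i < n; i++) meanDiff += (followUp[i] - baseline[i]) / n;
        double var = 0;
        for (int i = 0; i < n; i++) {
            double d = (followUp[i] - baseline[i]) - meanDiff;
            var += d * d / (n - 1);
        }
        return meanDiff / Math.sqrt(var / n);
    }

    public static void main(String[] args) {
        double[] baselineArea = {0.3, 0.8, 1.5, 2.2};   // hypothetical GA areas, mm^2
        double[] followUpArea = {0.7, 1.6, 2.6, 3.9};
        double t = pairedT(sqrtTransform(baselineArea), sqrtTransform(followUpArea));
        System.out.printf("Paired t = %.2f (df = %d)%n", t, baselineArea.length - 1);
    }
}
```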
The Wilcoxon signed-rank test was used in subgroup analyses that did not pass the normality test. All values were considered statistically significant at p < 0.05.

Results

Twenty-eight eyes of 14 patients with GA in one eye and iAMD in the fellow eye were included in the study. The study participants' demographics are presented in Table 1. The mean age of the study participants was 76.6 ± 5.1 years and the mean follow-up period was 25.11 ± 5.93 months. The logarithm of the minimum angle of resolution (logMAR) BCVA of eyes with GA was 0.43 ± 0.50 at baseline and 0.70 ± 0.62 at the last follow-up (p = 0.086), while that of the fellow eye was 0.30 ± 0.21 at baseline and 0.31 ± 0.27 at the last follow-up (p = 0.752); thus, the BCVA was significantly worse in the eyes with GA at the last follow-up (p = 0.050) (Table 1). The intraclass correlation coefficients of the two examiners for vessel density were 0.813 for the SVC and 0.891 for the DVC (both p < 0.001), and 0.855 for the measurement of GA size (p < 0.001).

GA Eyes vs. Fellow Eyes at Baseline

The retinal vessel densities in eyes with GA and fellow eyes at baseline were compared. The mean retinal vessel densities (%) were significantly lower in eyes with GA than in fellow eyes in the SVC (26.2 ± 3.9% vs. 28.3 ± 4.4%; p = 0.015) and DVC (24.2 ± 2.6% vs. 26.8 ± 1.9%; p = 0.003) (Figure 3).

Follow-Up Analysis in Eyes with GA at Baseline

To assess the association between the change of the GA area and vessel density, the square root of the GA area and the retinal vessel densities were compared between baseline and the last follow-up. The square root of the GA area was 0.86 ± 0.55 mm (0.22-2.15 mm) at baseline in the GA group. In the twelve eyes that could be followed up, the square root of the GA area was 0.92 ± 0.57 mm at baseline and 1.54 ± 1.03 mm at the last follow-up, indicating significant growth (p = 0.007) (Figure 4a). The retinal vascular density in eyes with GA at baseline that were available for the 24-month follow-up analysis was 25.9 ± 4.5% in the SVC and 24.9 ± 7.5% in the DVC at baseline, and 24.8 ± 2.9% in the SVC and 21.8 ± 4.9% in the DVC at the last follow-up, a change that was significant only in the DVC (p = 0.224 and p = 0.036) (Figure 4b). When analyzing the association between changes in the square root of the GA area and vessel density, a significant association was found only for the DVC (p = 0.584 for the SVC; p = 0.047, r = −0.583 for the DVC).

Follow-Up Analysis in Fellow Eyes

To assess the association between the development of GA and the change of vessel density, the retinal vessel densities at baseline and at the last follow-up of the fellow eyes were evaluated. In the twelve eyes that were available for the 24-month follow-up, the retinal vascular density in the fellow eyes was 27.7 ± 4.0% in the SVC and 26.1 ± 2.1% in the DVC at baseline, and 24.5 ± 4.1% in the SVC and 23.3 ± 4.2% in the DVC at the last follow-up, a significant change in both layers (p = 0.011 and p = 0.024) (Figure 4c). Out of the 12 eyes available for follow-up, seven fellow eyes (58.3%) newly developed GA during the follow-up period. The retinal vascular density in the eyes that did not develop GA (n = 5) was 27.9 ± 4.8% in the SVC and 26.3 ± 2.6% in the DVC at baseline, and 26.3 ± 4.8% in the SVC and 26.9 ± 2.1% in the DVC at the last follow-up (p = 0.131 and p = 0.670) (Figure 4d). In the eyes that developed GA (n = 7), the retinal vessel density was 26.4 ± 3.0% at baseline and 23.8 ± 2.9% at the last follow-up in the SVC, and 25.8 ± 2.2% at baseline and 22.4 ± 4.4% at the last follow-up in the DVC, a reduction that reached statistical significance in the DVC (p = 0.087 in the SVC and p = 0.047 in the DVC) (Figure 4e).

GA Eyes vs. Fellow Eyes at Baseline

The mean inner retinal and ONL thickness in eyes with GA and fellow eyes at baseline were compared. The mean inner retinal thickness values measured in the central 1-mm circle (fovea), inner 1-mm to 3-mm circle (parafovea), and outer 3- to 6-mm circle (perifovea) of the ETDRS grid are presented in Table 2.
The GCIPL thickness at the foveal and parafoveal areas was significantly thinner in eyes with GA than in fellow eyes (p = 0.032 and p = 0.029, respectively). The NFL and INL thicknesses did not exhibit any significant differences between the two eyes. The ONL thickness at the foveal and parafoveal areas was thinner in eyes with GA than in fellow eyes but reached statistical significance only in the foveal area (p = 0.024 and p = 0.065, respectively). All values are presented as mean ± standard deviation. GA = geographic atrophy; NFL = nerve fiber layer; GCIPL = ganglion cell-inner plexiform layer; INL = inner nuclear layer; ONL = outer nuclear layer. * Independent t-test.

Follow-Up Analysis in Eyes with GA at Baseline
The retinal thickness values at baseline and at last follow-up (n = 12) in eyes with GA that were available for follow-up analysis were compared to assess the change of inner retinal thickness during the follow-up period (Table 3). The GCIPL thickness was decreased in the outer 3- to 6-mm circle (perifovea) (p = 0.050), but the NFL and INL thicknesses did not show a significant reduction during the follow-up period. The ONL thickness was decreased in the outer 3- to 6-mm circle (perifovea) but did not reach statistical significance (p = 0.072). Table 3. Comparison of inner retinal thickness in eyes with GA between baseline and follow-up.

Discussion
The results of this study showed that the superficial and deep retinal vascular densities in eyes with GA were significantly reduced compared to fellow eyes with iAMD. We included patients who showed unilateral GA and iAMD in their fellow eyes to minimize the co-factors that affect the retinal vessel density and inner retinal thickness. We further excluded patients with CNV or a history of anti-VEGF injection, so we can conclude that the lower retinal vessel density was not triggered by anti-VEGF therapy. Previously, You et al. compared the retinal vessel density of GA patients and age-matched control patients and reported a 9% to 13% reduction in vessel density in the former group [10]. Toto et al. reported that the vessel density of the superficial capillary plexus was decreased relative to that in healthy controls (48.7% vs. 50.4%), even in iAMD patients without outer retinal degeneration, while the reduction in the SVC was not observed in patients with early AMD [11,12]. While previous studies compared eyes with GA (or iAMD) and healthy controls, we analyzed the retinal vascular density in patients with unilateral GA and iAMD in their fellow eyes and demonstrated a reduction of retinal vessel density in GA eyes relative to fellow eyes with iAMD. With photoreceptor atrophy in GA, the synaptic activity of the outer plexiform layer decreases, which is thought to be accompanied by a decrease in the vessel density of the DVC [10]. For the reduction in the SVC, it has been suggested that oxygen diffusion from the choriocapillaris into the inner retina increases as a result of atrophic changes in the outer retina, which in turn induces relative constriction of the retinal vasculature, a mechanism similar to that involved in pan-retinal photocoagulation [25,26]. In this study, the GCIPL of GA eyes was significantly thinner in the central 1-mm and inner 1- to 3-mm ETDRS circles than in fellow eyes without GA and showed a significant reduction in the follow-up analysis. Inner retinal deterioration in GA eyes has been well demonstrated in histological and imaging studies. Kim et al.
reported in a histological study of eyes with GA that the outer nuclear layer (ONL) was severely reduced (76.9%), while the GCL was also reduced by approximately 30%, and that INL cells were relatively preserved [9]. A volumetric and thickness analysis using OCT scans also reported a decrease in GCL that was proportional to the decrease in ONL in GA patients [7,8]. It is assumed that a chronic decrease in stimulation from photoreceptors in patients with GA causes a secondary loss of ganglion cells. A reduction in inner retinal thickness was also reported previously in eyes with iAMD, suggesting that the damage starts in all retinal layers, not just the outer retina [12,27,28]. Borrelli et al. reported ganglion cell complex thinning on OCT in patients with intermediate AMD, as well as reduced reflectivity in the inner and outer segments of photoreceptors, and suggested photoreceptor neuronal loss in intermediate AMD [29]. Further, longitudinal GCL thinning of 6% and 2.5% has been reported in fellow eyes with early and intermediate AMD of patients treated for neovascular AMD in one eye [30,31]. It is unclear whether the reduction in retinal vessel density in GA eyes is a secondary change due to photoreceptor loss or a concomitant change preceding ONL atrophy; nevertheless, the observation of a decrease in retinal vessel density and GCIPL thickness compared to fellow eyes with iAMD suggests several hypotheses. As sequential neuroretinal degeneration develops from intermediate AMD to GA, the neurodegenerative mechanism in iAMD might be followed by a reduction in oxygen demand and a resulting reduction in blood flow, or the inner retina might be damaged by progressive hypoperfusion secondary to vascular damage by AMD. In the follow-up analysis, we observed growth of the square root of the GA area by approximately 0.62 mm over the 2 years. Previously, GA progression rates reported in the literature ranged from 0.53 to 2.6 mm²/year [32,33], and the presence of GA in one eye is a strong risk factor for future GA in the fellow eye [34]. There was a significant change in retinal vessel density in the deep vascular complex in the follow-up analysis of GA eyes and a negative correlation between the change in the square root of the GA area and the vessel density of the DVC. As the deep vascular complex is mainly located in the INL, the vessel density reduction is thought to indicate a decrease in perfusion rather than an effect of volume reduction of the INL, as there was no reduction in INL thickness at follow-up compared with baseline. In the fellow eye analysis, the eyes that developed GA during the follow-up period showed a reduction in retinal vessel density in both the SVC and DVC, although statistical significance was confirmed only in the DVC. We speculate that this result supports inner retinal deterioration: the loss of ganglion cell nuclei and axons might accompany the progression of GA with a loss of photoreceptors, followed by retinal capillary constriction. Retinal perfusion might be impacted by reduced metabolism due to neurodegenerative changes in the inner retina and reduced synaptic activity. The lack of a significant correlation between the growth in GA size and the change in retinal vessel density of the SVC during the follow-up period suggests that a reduction in SVC perfusion might be a secondary sequela rather than a cause of GA. Similarly, Seddon et al.
reported that the loss of RPE in GA precedes the loss of choroidal vessels because the choroidal capillaries are well preserved at the edge of the lesion [35]. The limitations of the present study include its retrospective design and relatively small sample size. To overcome this limitation, we included patients who showed unilateral GA and iAMD in their fellow eyes, which might minimize the co-factors that affect the retinal vessel density and inner retinal thickness. A prospective study with a larger sample size is necessary to confirm our findings and to discern whether inner retinal changes precede retinal capillary changes. Furthermore, the square root of the GA area in this study varied among patients from 0.22 to 2.15 mm, which might have introduced error influencing the findings of retinal vessel density changes during follow-up, as both early and advanced GA cases were included. In addition, during OCTA imaging in patients with advanced GA, segmentation errors might have occurred despite our efforts to eliminate such errors. Furthermore, we did not compare the flow voids in the choriocapillaris between eyes with GA and iAMD, as we used spectral-domain OCT with light at 870 nm wavelength, which was reported to have poorer penetration into the choroid than swept-source OCT (wavelength of 1050 nm) [36,37]; additionally, the drusen in iAMD eyes might induce signal attenuation in the choriocapillaris layer [38]. Despite these limitations, to the best of our knowledge, this is the first comparison of the vessel density of the retinal capillary plexuses in patients with unilateral GA, and it could enrich our knowledge of the pathophysiology of GA.

Conclusions
In conclusion, the GA eyes showed significantly reduced retinal capillary density in the SVC and DVC relative to fellow eyes with iAMD. In addition, the GCIPL thickness was significantly reduced in GA eyes. This reduction in retinal vessel density is thought to be a secondary change to GA and might be a marker for GA secondary to non-neovascular AMD.

Informed Consent Statement: Patient consent was waived due to the retrospective design and de-identified patient data in this study (approved by the Institutional Review Board).

Data Availability Statement: The data that support the findings of this study are available from the corresponding author, Choi M, on reasonable request.
v3-fos-license
2019-04-12T13:53:06.236Z
2002-01-01T00:00:00.000
109173543
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.irrodl.org/index.php/irrodl/article/download/45/535", "pdf_hash": "56b3192aaa62ba33065368adba195191baffa15c", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:28", "s2fieldsofstudy": [ "Education", "Computer Science" ], "sha1": "e70d45c948d3642a64a6d31a4d0cdfcf201efac6", "year": 2002 }
pes2o/s2orc
2. Selection of Collaborative Tools

The previous report summarised the findings of an online survey concerning Master's of Distance Education students' attitudes to online collaborative tools. The respondents in the study were 135 graduate students and faculty members of Athabasca University's Centre for Distance Education (CDE). They demonstrated particular interest in tools that offer the following features: file sharing; automatic synchronisation of documentation for the group; audio conferencing; text chat; and privacy. In the effort to respond to this interest, the Centre conducted a series of trials of conferencing and other file-sharing products. This report discusses the merits and disadvantages of current collaborative methods, and problems faced by distance educators and their students in seeking to adopt them.

Trials of Free Products

Seven online products/services were reviewed (April to June 2001) in their most up-to-date versions. Emphasis was placed upon whether or not each provided the online collaborative features found to be useful by the needs assessment study (Report 2 in this series). The relevant features of each application are summarised in Appendix 1.

1. NetMeeting: At this point, no free product appears to provide all of the features that the students find potentially useful. NetMeeting approaches this level, though it is not a cross-platform application (i.e., Macs as well as PCs), and provides multi-point audio for a limited number of users only. This product is also infrequently updated: version 3.01 has currently been in place for over a year, although we observe that it is being included as a useful 'Accessory' within Windows 2000 and higher versions.

2. ICQ: provides text chat, instant messaging, file transfer, and two-party audio, using the same protocol (H.323) as NetMeeting. However, ICQ uses the business practice of providing its subscription lists to advertisers, which many of the students surveyed in our sample found intrusive. ICQ is not a cross-platform application.

3. Roger Wilco: provides audio chat rooms, though few other features. (A similar product, HearMe, has ceased operation since these product trials were conducted.)

4. PalTalk: provides reliable audio and text conferencing, including private groups, instant messaging, and a file transfer facility. For many online groups, PalTalk appears to be the most appropriate application so far examined in these trials. It is not a cross-platform application, though it is reliable, available in free and fee-based versions, and requires little technical sophistication. The free version is supported by pop-up ads that appear at launch and shutdown, though these can be blocked by anti-popup software. PalTalk has recently subsumed the subscription list of FireTalk (the reliable though now defunct audio-conferencing product previously favoured by the CDE faculty and students).

5. Stuffincommon: is a free online service that provides many of the functions requested by the students. The service provides room(s) containing whiteboard and chat facilities, for self-defined communities. The Stuffincommon whiteboard is superior to other common whiteboard tools in allowing users to add URL shortcuts, files, Post-it notes, and images. Each community has its own rooms, into which only invited parties may enter; and a community can create rooms for specific functions that it may define.
Each community is a separate Web site and is thus not platform-dependent. It requires Web browser software though no other software download. Stuffincommon lacks an audio facility, though it could be used in conjunction with a product such as PalTalk. Privacy is provided by a login requirement and a community members' list.

An Integrated Product Trial

In view of the difficulty of identifying no-cost products satisfying CDE students' perceived needs, it was decided to test a product which, although not free, integrates all of the desired online learning tools. A new product named Groove was identified, a peer-to-peer collaboration application providing a wide variety of functions: audio conferencing, text chat, privacy, file-sharing, automatic synchronisation of meeting notes, private discussion boards, and a high degree of personal security. At the time of testing (Summer 2001), its first edition was available at no charge (Groove 1.1, Preview Edition). It was a work in progress, with the fully licensed application due for release later in the year. A team of eight CDE members took part in the tests, including six students and two faculty members. A features comparison of all seven products featured in these tests is presented in Appendix 1.

a. The Product: Groove operates across a network in a peer-to-peer mode; i.e., communication among participants is direct rather than via a central server. This provides privacy and security, and a potential decrease of data transmission time. The product's design is based on the concept of "shared space" (i.e., a private meeting place), within which alternative modes of communication may be employed. Groove is implemented as a set of encrypted files on each participant's computer. Each "space" contains a list of members, their shared applications, and their accumulated data. An individual user can use several shared spaces, and can define each of them on multiple computers. Membership in a shared space is by email invitation only; thus, security and mobility are provided.

b. Results of Testing: Faculty and student members of the CDE tested Groove during the period of May to September 2001. The product involves a 14 MB software download, and is resource-intensive, drawing upon approximately 32 MB of RAM memory during usage. Its use in the CDE program at this time would therefore be a problem for some CDE users, if only for the 9/135 survey respondents with computers limited to this amount of RAM.

c. The bandwidth requirements of Groove represent a more serious problem. Severe data loss and break-ups in audio transmission were observed during the tests. In addition, we found that a text box message could take up to several minutes to reach the other participants. This problem has been reduced in a subsequent version of the program (Build 940). A member of the Groove technical support group confirmed that the breakup of audio transmission is a bandwidth problem, at least on dial-up. Sixty-nine of the 135 respondents to the CDE poll use dial-up Internet access.

d. Groove is a 'message-intensive' application: i.e., much status messaging is transmitted during a meeting with regard to the session's progress. The product provides pop-up displays for many of these functions ("message being sent", "message sent", etc.). It even displays "xxx is typing a message" during text chats. This heavy message load may contribute to the product's high bandwidth requirement.
The process by which the application coordinates the simultaneous contributions of participants is known as synchronisation. This is required when participants wish to make simultaneous entries into a shared tool (e.g., the notepad), when some members of a group are absent, or when a member makes entries whilst offline. The tests of Groove v1.1 indicated that connection to a Groove meeting can take three to five minutes, owing to this synchronisation process, even between computers in the same room connected by a 100 Mbps local area network (LAN). During the synchronisation of these two computers, audio transmission halted while the data was being transferred from one computer to another. Once the computers were synchronised, the audio time lag between them increased from almost imperceptible to approximately three seconds.

e. Groove's Future Status: Groove's support technician states that improvements in audio transmission are a high priority, though they could not provide a date for this to be achieved. Groove will continue to offer a free preview edition after the licensed version is released, but this will not contain feature upgrades, and will not receive the same maintenance priority as the licensed version. The product's support staff advised us that a
Mac version is one of the company's highest priorities, and that Groove and Apple Corporation were still negotiating this issue. They indicated that the size of the product download package will not be reduced from its current size of 14 MB. These tests led Athabasca University's software evaluation team to the following conclusions.

a. Integrated applications offer more products than all or most other collaborative tools. However, the problems with integrated software are similar to those encountered with the integrated tape-slide educational technologies of the 1970s. For the users that need to use all of the features simultaneously, the package can be bulky and cumbersome; while for those who only need one or two simultaneous features, the package's contents are excessive. The CDE will continue to monitor the evolution of the new integrated product, Groove, in graduate classes that cover technical issues.
b. Simpler products that provide fewer simultaneous applications (e.g., PalTalk, with its superior audio-conferencing and ancillary functions) are the most immediately convenient for students to download and install as collaborative tools.

c. The Stuffincommon website can be recommended to the general CDE population as a convenient means for text chat, sharing files and Web links, images, and notes on digital Post-its. It can be used in conjunction with a product such as PalTalk, and is a cross-platform application.

The next report in this series will review text-based conferencing applications.

N.B. Owing to the speed with which Web addresses become outdated, online references are not cited in these summary reports. They are available, together with updates to the current report, at the Athabasca University software evaluation site: cde.athabascau.ca/softeval/. Italicised product names in this report can be assumed to be registered trademarks.
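As a rough illustration of the bandwidth concern raised in the report, the sketch below estimates how long the 14 MB Groove download would take on a nominal 56 kbps dial-up connection versus a 1 Mbps broadband link; the line speeds are assumptions made for illustration and are not figures from the report.

# Back-of-the-envelope sketch (assumed nominal line speeds, not figures from the
# report): time to fetch the 14 MB Groove package over dial-up versus broadband.
download_mb = 14.0
for label, kbps in [("56 kbps dial-up", 56), ("1 Mbps broadband", 1000)]:
    seconds = download_mb * 8 * 1024 * 1024 / (kbps * 1000.0)   # bits / (bits per second)
    print(f"{label}: ~{seconds / 60:.0f} minutes")
# roughly 35 minutes on dial-up (ignoring modem overhead) versus about 2 minutes
# on broadband, which is why the 69/135 dial-up respondents would find the
# package impractical.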
v3-fos-license
2022-02-01T04:47:47.840Z
2022-01-17T00:00:00.000
257038003
{ "extfieldsofstudy": [ "Physics", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2075-1680/13/3/138/pdf?version=1708573103", "pdf_hash": "b39109577dab44629d23cbee8998ea08949c00a7", "pdf_src": "ArXiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:29", "s2fieldsofstudy": [ "Physics" ], "sha1": "0bcdffd0c756aaaf45e24f5628eea19fdea1985f", "year": 2022 }
pes2o/s2orc
Solving particle-antiparticle and cosmological constant problems We argue that fundamental objects in particle theory are not elementary particles and antiparticles but objects described by irreducible representations (IRs) of the de Sitter (dS) algebra. One might ask why, then, experimental data give the impression that particles and antiparticles are fundamental and there are conserved additive quantum numbers (electric charge, baryon quantum number and others). The matter is that, at the present stage of the universe, the contraction parameter $R$ from the dS to the Poincare algebra is very large and, in the formal limit $R\to\infty$, one IR of the dS algebra splits into two IRs of the Poincare algebra corresponding to a particle and its antiparticle with the same masses. The problem why the quantities $(c,\hbar,R)$ are as are does not arise because they are contraction parameters for transitions from more general Lie algebras to less general ones. Then the baryon asymmetry of the universe problem does not arise. At the present stage of the universe, the phenomenon of cosmological acceleration (PCA) is described without uncertainties as an inevitable {\it kinematical} consequence of quantum theory in semiclassical approximation. In particular, it is not necessary to involve dark energy the physical meaning of which is a mystery. In our approach, background space and its geometry are not used and $R$ has nothing to do with the radius of dS space. In semiclassical approximation, the results for PCA are the same as in General Relativity if $\Lambda=3/R^2$, i.e., $\Lambda>0$ and there is no freedom in choosing the value of $\Lambda$. General principles of quantum theory In this paper we solve the particle-antiparticle and cosmological constant problems proceeding from quantum theory, which postulates: H) Various states of the system under consideration are elements of a Hilbert space H with a positive definite metric, that is, the norm of any non-zero element of H is positive. O) Each physical quantity is defined by a self-adjoint operator O in H. S) Symmetry at the quantum level is defined by a self-adjoint representation of a real Lie algebra A in H such that the representation operator of any basis element of A is self-adjoint. These conditions guarantee the probabilistic interpretation of quantum theory.We explain below that in the approaches to solving these problems that are described in the literature, not all of these conditions have been met. Problems with space-time background in quantum theory Modern fundamental particle theories (QED, QCD and electroweak theory) are based on the concept of particle-antiparticle.Historically, this concept has arisen as a consequence of the fact that the Dirac equation has solutions with positive and negative energies.The solutions with positive energies are associated with particles, and the solutions with negative energies -with corresponding antiparticles.And when the positron was found, it was a great success of the Dirac equation.Another great success is that in the approximation (v/c) 2 , the Dirac equation reproduces the fine structure of the hydrogen atom with a very high accuracy. 
However, now we know that there are problems with the physical interpretation of the Dirac equation.For example, in higher order approximations, the probabilistic interpretation of non-quantized Dirac spinors is lost because they are described by representations induced from non-self-adjoined representations of the Lorenz algebra.Moreover, this problem exists for any functions described by local relativistic covariant equations (Klein-Gordon, Dirac, Rarita-Schwinger and others).So, a space of functions satisfying a local covariant equation does not satisfy the conditions (H, O, S). As shown by Pauli [1], in the case of fields with an integer spin it is not possible to define a positive-definite charge operator while in the case of fields with a half-integer spin it is not possible to define a positive-definite energy operator. Another fundamental problem in the interpretation of the Dirac equation is as follows.One of the key principles of quantum theory is the principle of superposition.This principle states that if ψ 1 and ψ 2 are possible states of a physical system then c 1 ψ 1 + c 2 ψ 2 , when c 1 and c 2 are complex coefficients, also is a possible state.The Dirac equation is the linear equation, and, if ψ 1 (x) and ψ 2 (x) are solutions of the equation, then c 1 ψ 1 (x) + c 2 ψ 2 (x) also is a solution.In the spirit of the Dirac equation, there should be no separate particles the electron and the positron.It should be only one particle such that electron states are the states of this particle with positive energies, positron states are the states of this particle with negative energies and the superposition of electron and positron states should not be prohibited.However, in view of charge conservation, baryon number conservation and lepton numbers conservation, the superposition of a particle and its antiparticle is prohibited. Modern particle theories are based on Poincare symmetry which, according to S), is defined by a self-adjoint representation of the Poincare algebra.In these theories, elementary particles, by definition, are described by self-adjoined irreducible representations (IRs) of the Poincare algebra.Such IRs have a property that energies in them can be either strictly positive or strictly negative but there are no IRs where energies have different signs.The objects described by positive-energy IRs are called particles, the objects described by negative-energy IRs are called antiparticles, and their energies become positive after second quantization.There are no elementary particles which are superpositions of a particle and its antiparticle, and as noted above, this is not in the spirit of the Dirac equation. The problems in interpreting non-quantized solutions of the Dirac equation are well known, but they are described to illustrate the problems that arise when trying to describe a particle and its antiparticle within the framework of solutions of a non-quantized local covariant equation. 
In particle theories, only quantized Dirac spinors ψ(x) are used.However, there are also problems in interpreting quantized solutions of the Dirac equation.Here x is treated as a point in Minkowski space.However, ψ(x) is an operator in the Fock space for an infinite number of particles.Each particle in the Fock space can be described by its own coordinates in the approximation when the position operator exists [2].Then the following question arises: why do we need an extra coordinate x which does not have any physical meaning because it does not belong to any particle and so is not measurable?If we accept that physical quantities should be treated in the framework of O) then x is not a physical quantity because there is no self-adjoint operator for x. A justification of the presence of x in quantized solutions of local covariant equations is that in quantum field theories (QFT) the Lagrangian density depends on x, but this is only the integration parameter in the intermediate stage.The goal of the theory is to construct the S-matrix, and, when the theory is already constructed, one can forget about Minkowski space because no physical quantity depends on x.This is in the spirit of the Heisenberg S-matrix program according to which in relativistic quantum theory it is possible to describe only transitions of states from the infinite past when t → −∞ to the distant future when t → ∞. The fact that the theory gives the S-matrix in momentum representation does not mean that the coordinate description is excluded.In typical situations, the position operator in momentum representation exists not only in the nonrelativistic case but in the relativistic case as well.It is known as the Newton-Wigner position operator [3] or its modifications.However, the coordinate description of elemen-tary particles can work only in some approximations.In particular, even in most favorable scenarios, for a massive particle with the mass m, its coordinates cannot be measured with the accuracy better than the particle Compton wave length h/mc. When there are many bodies, the impression may arise that they are in some space but this is only an impression.Background spacetime (e.g., Minkowski space) is only a mathematical concept needed in classical theory.For example, in QED we deal with electrons, positrons and photons.When the position operator exists, each particle can be described by its own coordinates.In quantum theory the coordinates of Minkowski space do not have a physical meaning because they are not described by self-adjoined operators, do not refer to any particle and are not measurable.However, in classical electrodynamics we do not consider electrons, positrons and photons.Here the concepts of the electric and magnetic fields (E(x), B(x)) have the meaning of the mean contribution of all particles in the point x of Minkowski space. This situation is analogous to that in statistical physics.Here we do not consider each particle separately but describe the mean contribution of all particles by temperature, pressure etc.Those quantities have a physical meaning not for each separate particle but for ensembles of many particles. 
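A short worked example of the measurability bound mentioned above, computing the Compton wave length h/mc for two familiar particles; the masses are standard SI values (not taken from the text), and h here denotes the reduced Planck constant, following the text's notation.

# Worked illustration (standard SI values, not from the text): the Compton wave
# length h/(m c) that bounds how accurately a massive particle's coordinates can
# be measured, with h denoting the reduced Planck constant.
hbar = 1.054571817e-34          # J*s
c = 2.99792458e8                # m/s
masses = {"electron": 9.1093837e-31, "proton": 1.67262192e-27}   # kg
for name, m in masses.items():
    print(f"{name}: h/(m c) = {hbar / (m * c):.2e} m")
# electron: ~3.9e-13 m; proton: ~2.1e-16 m -- tiny on everyday scales, but a hard
# floor below which sharper coordinate values lose their meaning.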
Space-time background is the basic element of QFT. There is no branch of science where such impressive agreement between theory and experiment has been achieved. However, those successes have been achieved only in perturbation theory, while it is not known how the theory works beyond perturbation theory. Also, the level of mathematical rigor in QFT is very poor and, as a result, QFT has several known difficulties and inconsistencies. One of the key inconsistencies of QFT is the following. It is known (see e.g., the textbook [4]) that quantum interacting local fields can be treated only as operatorial distributions. A known fact from the theory of distributions is that the product of distributions at the same point is not a correct mathematical operation. Physicists often ignore this problem and use such products because, in their opinion, this preserves locality (although the operator of x does not exist). As a consequence, the representation operators of interacting systems in QFT are not well defined and the theory contains anomalies and divergences. While in renormalizable theories the problem of divergences can be circumvented at the level of perturbation theory, in quantum gravity divergences cannot be excluded even in the lowest orders of perturbation theory. As noted above, in spite of such mathematical problems, QFT is very popular since it has achieved success in describing many experimental data. In the present paper, we consider the particle-antiparticle and cosmological constant problems. In our approach, there is no need to involve a space-time background for solving those problems, and they can be solved using only rigorous mathematics.

Symmetry at quantum level

In the literature, symmetry in QFT is usually explained as follows. Since the Poincare group is the group of motions of Minkowski space, the system under consideration should be described by unitary representations of this group. This implies that the representation generators commute according to the commutation relations of the Poincare group Lie algebra:

[P^µ, P^ν] = 0, [P^µ, M^{νρ}] = -i(η^{µρ}P^ν - η^{µν}P^ρ),
[M^{µν}, M^{ρσ}] = -i(η^{µρ}M^{νσ} + η^{νσ}M^{µρ} - η^{µσ}M^{νρ} - η^{νρ}M^{µσ}),   (1.1)

where µ, ν = 0, 1, 2, 3, η^{µν} = 0 if µ ≠ ν, η^{00} = −η^{11} = −η^{22} = −η^{33} = 1, P^µ are the operators of the four-momentum and M^{µν} are the operators of the Lorentz angular momenta. This approach is in the spirit of the Erlangen Program proposed by Felix Klein in 1872, when quantum theory did not yet exist. However, although the Poincare group is the group of motions of Minkowski space, the description (1.1) does not involve this group and this space.
Then each elementary particle is described by a self-adjoined IR of a real Lie algebra A, and a system of N noninteracting particles is described by the tensor product of the corresponding IRs. This implies that, for the system as a whole, each momentum operator is a sum of the corresponding single-particle momenta, each angular momentum operator is a sum of the corresponding single-particle angular momenta, and this is the most complete possible description of this system. In particular, nonrelativistic symmetry implies that A is the Galilei algebra, relativistic symmetry implies that A is the Poincare algebra, de Sitter (dS) symmetry implies that A is the dS algebra so(1,4) and anti-de Sitter (AdS) symmetry implies that A is the AdS algebra so(2,3).

In his famous paper "Missed Opportunities" [6] Dyson notes that:

• a) Relativistic quantum theories are more general than nonrelativistic quantum theories even from purely mathematical considerations because the Poincare group is more symmetric than the Galilei one: the latter can be obtained from the former by contraction c → ∞.

• b) dS and AdS quantum theories are more general than relativistic quantum theories even from purely mathematical considerations because the dS and AdS groups are more symmetric than the Poincare one: the latter can be obtained from the former by contraction R → ∞, where R is a parameter with the dimension of length, and the meaning of this parameter will be explained below.

• c) At the same time, since the dS and AdS groups are semisimple, they have a maximum possible symmetry and cannot be obtained from more symmetric groups by contraction.

As noted above, symmetry at the quantum level should be defined in the framework of S), and in [2], the statements a)-c) have been reformulated in terms of the corresponding Lie algebras. It has also been shown that the fact that quantum theory is more general than classical theory follows even from purely mathematical considerations because formally the classical symmetry algebra can be obtained from the symmetry algebra in quantum theory by contraction h → 0. For these reasons, the most general description in terms of ten-dimensional Lie algebras should be carried out in terms of quantum dS or AdS symmetry. However, as explained below, in particle theory, dS symmetry is more general than AdS one.

The definition of those symmetries is as follows. If M^{ab} (a, b = 0, 1, 2, 3, 4, M^{ab} = −M^{ba}) are the angular momentum operators for the system under consideration, they should satisfy the commutation relations:

[M^{ab}, M^{cd}] = -i(η^{ac}M^{bd} + η^{bd}M^{ac} - η^{ad}M^{bc} - η^{bc}M^{ad}),   (1.2)

where η^{ab} = 0 if a ≠ b, η^{00} = −η^{11} = −η^{22} = −η^{33} = 1 and η^{44} = ∓1 for the dS and AdS symmetries, respectively. Although the dS and AdS groups are the groups of motions of dS and AdS spaces, respectively, the description in terms of (1.2) does not involve those groups and spaces, and it is a definition of dS and AdS symmetries in the framework of S) (see the discussion in [2,5]). In QFT, interacting particles are described by field functions defined on Minkowski, dS and AdS spaces. However, since we consider only noninteracting bodies and describe them in terms of IRs, at this level we don't need these fields and spaces.

The procedure of contraction from dS or AdS symmetry to Poincare one is defined as follows. If we define the momentum operators P^µ as P^µ = M^{4µ}/R (µ = 0, 1, 2, 3), then in the formal limit when R → ∞, M^{4µ} → ∞ but the quantities P^µ are finite, and Eq. (1.2) becomes Eq. (1.1). Here R is a parameter which has nothing to do with the dS and AdS spaces. As seen from Eq.
(1.2), quantum dS and AdS theories do not involve the dimensional parameters (c, h, R) because (kg, m, s) are meaningful only at the macroscopic level.

As noted by Berry [7], the reduction from more general theories to less general ones involves a quantity δ which is not equal to zero in more general theories and becomes zero in less general theories. This reduction involves the study of limits and is often obstructed by the fact that the limit is singular. In [7], several examples of such reductions are considered. However, at the quantum level, the reduction (contraction) should be described in terms of relations between the representation operators of more general and less general algebras.

As explained in [2], in the limit when the contraction parameter goes to zero or infinity, some original representation operators become singular (in agreement with the results of [7]). However, it is possible to define a new set of operators such that they remain finite in this limit. Then, in less general theories, some commutators become zero while in more general theories they are non-zero. So, less general theories contain more zero commutators than the corresponding more general theories.

Probably, the best known case is the reduction from relativistic to nonrelativistic theory. In relativistic theory, the quantity c is not needed, velocities v are dimensionless and, if v = |v|, then v ≤ 1 if tachyons are not taken into account. However, if people want to describe velocities in m/s, then c also has the dimension m/s. Physicists usually understand that physics cannot (and should not) derive that c ≈ 3 · 10^8 m/s. This value is purely kinematical (i.e., it does not depend on gravity and other interactions) and is as is simply because people want to describe velocities in m/s. Since the quantities (m, s) have a physical meaning only at the macroscopic level, one can expect that the values of c in m/s are different at different stages of the universe. In [7], the connection between relativistic and nonrelativistic theories is described in terms of "low-speed" series expansions in δ = v/c. However, such expansions are well defined only in classical (non-quantum) theory. At the quantum level, this reduction should be described in terms of relations between the representation operators of the Poincare and Galilei algebras. Then, in agreement with [7], the transition from relativistic to nonrelativistic theory becomes singular in the formal limit c → ∞. As described in [2,8], the singularities can be resolved by using the Galilei boost operators G^j = M^{0j}/c (j = 1, 2, 3) instead of the Poincare boost operators M^{0j} and by using the time translation operator E = P^0 c instead of the Poincare energy operator P^0. Then, as follows from Eq. (1.1), instead of the relations [M^{0j}, M^{0k}] = -iM^{jk} we now have [G^j, G^k] = -iM^{jk}/c^2. So far, no approximations have been made. A question arises whether the strong limits of the operators M^{jk}/c^2 are zero when c → ∞. In general, not for all elements x of the Hilbert space under consideration do the vectors y = (M^{jk}/c^2)x become zero when c → ∞. The meaning of the nonrelativistic approximation at the operator level is that only those elements x are important for which y → 0 when c → ∞. Therefore, in the nonrelativistic approximation, [G^j, G^k] = 0 and we have a greater number of zero commutators, because in the relativistic case, [M^{0j}, M^{0k}] ≠ 0. And, since M^{0j} = G^j c, we conclude that, when c → ∞, the operators M^{0j} become singular, in agreement with the observation in [7].
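The contraction pattern just described can be illustrated with finite matrices. The sketch below uses the defining 4×4 representation of the Lorentz algebra (real matrices, with conventions chosen for the sketch and not taken from the text) rather than the self-adjoined IRs discussed above, but it shows the same behavior: the rescaled boosts G_j = K_j/c commute in the formal limit c → ∞, while the original boosts do not.

# Finite-matrix illustration of the c -> infinity contraction (defining 4x4
# representation of the Lorentz algebra; the conventions are this sketch's own,
# not the unitary IRs used in the text).
import numpy as np

K1 = np.zeros((4, 4)); K1[0, 1] = K1[1, 0] = 1.0        # boost along x
K2 = np.zeros((4, 4)); K2[0, 2] = K2[2, 0] = 1.0        # boost along y
J3 = np.zeros((4, 4)); J3[1, 2], J3[2, 1] = -1.0, 1.0   # rotation about z

def comm(A, B):
    return A @ B - B @ A

# relativistic case: boosts do not commute, [K1, K2] = -J3
assert np.allclose(comm(K1, K2), -J3)

# Galilei boosts G_j = K_j / c: the commutator scales as 1/c**2 and vanishes in
# the contraction limit, so the less general theory has more zero commutators
for c in (1.0, 1e2, 1e4, 1e8):
    G1, G2 = K1 / c, K2 / c
    print(f"c = {c:.0e}:  ||[G1, G2]|| = {np.linalg.norm(comm(G1, G2)):.1e}")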
Consider now the relation between classical and quantum theories. In the latter, the quantity h is not needed and angular momenta are dimensionless. As shown even in textbooks, their projections can take only values that are multiples of ±1/2. However, when people want to describe angular momenta in kg·m^2/s, h and all the operators in Eq. (1.2) become dimensional and also have the dimension kg·m^2/s. Then all nonzero commutators in the symmetry algebra become proportional to h, and Eq. (1.2) can be represented as [M^{ab}, M^{cd}] = ihA^{abcd}. Physicists usually understand that physics cannot (and should not) derive that h ≈ 1.054 · 10^{-34} kg·m^2/s. This value is purely kinematical and is as is simply because people want to describe angular momenta in kg·m^2/s. Since the quantities (kg, m, s) have a physical meaning only at the macroscopic level, one can expect that the values of h in kg·m^2/s are different at different stages of the universe. If A^{abcd} ≠ 0 then, in general, not for all elements x of the Hilbert space under consideration do the vectors y = hA^{abcd}x become zero when h → 0. The meaning of the classical approximation is that only those elements x are important for which y → 0 when h → 0. Therefore, in this approximation, all the commutators become zero and all physical quantities are defined without uncertainties. So, even the description in terms of Hilbert spaces becomes redundant.

Typically, in particle theories, the quantities c and h are not involved and it is said that the units c = h = 1 are used.

At the quantum level, Eq. (1.2) is the most general description of dS and AdS symmetries and all the operators in Eq. (1.2) are dimensionless. At this level, the theory does not need the quantity R and, in full analogy with the above discussion of the quantities c and h, one can say that R = 1 is a possible choice. The dimensional quantity R arises if, instead of the dimensionless operators M^{4µ}, physicists want to deal with the 4-momenta P^µ defined such that M^{4µ} = RP^µ. In full analogy with the discussion of c and h, physics cannot (and should not) derive the value of R. It is as is simply because people want to measure distances in meters. This value is purely kinematical, i.e., it does not depend on gravity and other interactions. As noted in Sec. 3.4, at the present stage of the universe, R is of the order of 10^26 m but, since the concept of meter has a physical meaning only at the macroscopic level, one can expect that the values of R in meters are different at different stages of the universe.

Although, at the level of contraction parameters, R has nothing to do with the radius of the background space and is fundamental to the same extent as c and h, physicists usually want to treat R as the radius of the background space. In General Relativity (GR), which is a non-quantum theory, the cosmological constant Λ equals ±3/R^2 for the dS and AdS symmetries, respectively. Physicists usually believe that physics should derive the value of Λ and that the solution of the dark energy problem depends on this value. They also believe that QFT of gravity should confirm the experimental result that, in units c = h = 1, Λ is of the order of 10^{-122}/G, where G is the gravitational constant. We will discuss this problem in Sec. 3.4. As follows from Eq.
(1.2), [M^{4µ}, M^{4ν}] = iM^{µν}. Therefore [P^µ, P^ν] = iM^{µν}/R^2. A question arises whether the strong limits of the operators M^{µν}/R^2 are zero when R → ∞. In general, not for all elements x of the Hilbert space under consideration do the vectors y = (M^{µν}/R^2)x become zero when R → ∞. The meaning of the Poincare approximation at the operator level is that only those elements x are important for which y → 0 when R → ∞. Therefore, in the Poincare approximation, [P^µ, P^ν] = 0 and we have a greater number of zero commutators, because in the dS and AdS cases, [M^{4µ}, M^{4ν}] ≠ 0. And, since M^{4µ} = P^µ R, we conclude that, when R → ∞, the operators M^{4µ} become singular, in agreement with the observation in [7].

Chapter 2 Solving particle-antiparticle problem

2.1 Particles and antiparticles in standard quantum theory

Standard particle theories are based on Poincare symmetry, and here the concepts of particles and antiparticles are considered from the point of view of two approaches which we call Approach A and Approach B. We first recall the basic known facts about IRs of the Poincare algebra. Their classification has first been given by Wigner [9] and then repeated by many authors (see e.g., [10]).

We denote by E = P^0 the energy operator and by P = (P^1, P^2, P^3) the spatial momentum operator. Then W = E^2 − P^2 is the Casimir operator of the Poincare algebra, i.e., it commutes with all operators of the algebra. As follows from the Schur lemma, W has only one eigenvalue in every IR. We will not consider tachyons, and then this eigenvalue is ≥ 0 and can be denoted m^2, where m ≥ 0 is called the particle mass. We will consider massive IRs where m > 0; the case m = 0 will be mentioned below.

Let p be the particle four-momentum such that p^2 = (p^0)^2 − p^2 = m^2. We denote by v = p/m the particle four-velocity such that v^2 = 1. Then v_0^2 = 1 + v^2, and we will always choose v_0 such that v_0 ≥ 1. Let dρ(v) = d^3v/v_0 be the Lorentz invariant volume element on the Lorentz hyperboloid. If s is the spin of the particle under consideration, then we use ||...|| to denote the norm in the space of the unitary IR of the group SU(2) with the spin s. Then the space of a self-adjoined IR of the Poincare algebra is the space of functions f(v) on the Lorentz hyperboloid with the range in the space of the IR of the group SU(2) with the spin s and such that

∫ ||f(v)||^2 dρ(v) < ∞.

Then the operators of the IR are given by [9,10]

E = ±mv_0, P = ±mv, J = l(v) + s, N = -iv_0 ∂/∂v + (s × v)/(v_0 + 1),   (2.1)

where J = {M^{23}, M^{31}, M^{12}}, N = {M^{01}, M^{02}, M^{03}}, s is the spin operator, l(v) = −iv × ∂/∂v and ± refers to the IRs with positive and negative energies, respectively. Approach A is based on the fact that, as follows from Eq.
(2.1), in self-adjoined IRs of the Poincare algebra, the energy spectrum can be either ≥ 0 or ≤ 0, and there are no IRs where the energy spectrum contains both positive and negative energies. In this approach, the objects described by the corresponding IRs are called elementary particles and antiparticles, respectively. On the other hand, Approach B proceeds from the assumption that elementary particles are described by local covariant equations. The solutions of these equations with positive energies are called particles and the solutions with negative energies are called antiparticles.

When we consider a system consisting of particles and antiparticles, the energy signs for both of them should be the same. Indeed, consider, for example, a system of two particles with the same mass, and let their momenta p_1 and p_2 be such that p_1 + p_2 = 0. Then, if the energy of particle 1 is positive and the energy of particle 2 is negative, the total four-momentum of the system would be zero, which contradicts experimental data. By convention, the energy sign of all particles and antiparticles in question is chosen to be positive. For this purpose, the procedure of second quantization is defined such that after this procedure the energies of antiparticles become positive. Then the mass of any particle is the minimum value of its energy.

Suppose now that we have two particles such that particle 1 has the mass m_1, spin s_1 and is characterized by some additive quantum numbers (e.g., electric charge, baryon quantum number etc.), and particle 2 has the mass m_2, spin s_2 = s_1 and all additive quantum numbers characterizing particle 2 equal the corresponding quantum numbers for particle 1 with the opposite sign. A question arises when particle 2 can be treated as an antiparticle for particle 1. Is it necessary that m_1 should exactly equal m_2, or can m_1 and m_2 slightly differ from each other? In particular, can we guarantee that the mass of the positron exactly equals the mass of the electron, the mass of the proton exactly equals the mass of the antiproton etc.?

If we work only in the framework of Approach A then we cannot answer this question because here the IRs for particles 1 and 2 are independent of each other and there are no limitations on the relation between m_1 and m_2.

On the other hand, in Approach B, m_1 = m_2, but this has been achieved at the expense of losing the probabilistic interpretation. Indeed, here a particle and its antiparticle are elements of the same field state ψ(x) with positive and negative energies, respectively, where x is a vector from Minkowski space and ψ(x) satisfies a relativistic covariant field equation (Dirac, Klein-Gordon, Rarita-Schwinger and others). However, it has already been noted in Sec. 1.1 that, at the quantum level, covariant fields and the quantity x are not defined in the framework of (H, O, S). In particular, at the quantum level, the physical meaning of x is unclear because there is no operator for x.

A usual phrase in the literature is that in QFT the fact that m_1 = m_2 follows from the CPT theorem. As shown e.g., in [11,12], it is a consequence of locality since, by construction, states described by local covariant equations are direct sums of IRs for a particle and its antiparticle with equal masses. However, since the concept of locality is not formulated in the framework of (H, O, S), this concept does not have a clear physical meaning, and this fact has been pointed out even in known textbooks (see e.g., [4]). Therefore, QFT does not give a rigorous proof that m_1 = m_2.

Also, one can pose the question of what happens if locality is only an approximation: in that case, is the equality of masses exact or approximate? However, since, at the quantum level, the physical meaning of the concept of locality is unclear, the physical meaning of this question is also unclear. Consider a simple model in which electromagnetic and weak interactions are absent. Then the fact that the proton and the neutron have equal masses has nothing to do with locality; it is only a consequence of the fact that they belong to the same isotopic multiplet, i.e., they are simply different states of the same object, the nucleon.

Note that in Poincare invariant quantum theories there can exist elementary particles for which all additive quantum numbers are zero. Such particles are called neutral because they coincide with their antiparticles.
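The spinless parts of the operators quoted in Eq. (2.1) can be checked against the Poincare commutation relations symbolically. The sketch below is only an illustration: it assumes the standard Wigner form written above, drops the spin terms, and verifies two of the commutators, [N_j, E] = -iP_j and [N_j, P_k] = -iδ_{jk}E, in the sign conventions of Eq. (1.1) as given here.

# Symbolic check (spin terms omitted; conventions as in Eqs. (1.1) and (2.1) above):
# the operators E = m*v0, P_j = m*v_j, N_j = -i*v0*d/dv_j acting on functions of
# the velocity satisfy [N_j, E] = -i P_j and [N_j, P_k] = -i delta_jk E.
import sympy as sp

v1, v2, v3, m = sp.symbols('v1 v2 v3 m', real=True, positive=True)
v = [v1, v2, v3]
v0 = sp.sqrt(1 + v1**2 + v2**2 + v3**2)
f = sp.Function('f')(v1, v2, v3)          # arbitrary wave function on the hyperboloid

E = lambda expr: m * v0 * expr
P = lambda j, expr: m * v[j] * expr
N = lambda j, expr: -sp.I * v0 * sp.diff(expr, v[j])

def comm(A, B, expr):
    return sp.expand(A(B(expr)) - B(A(expr)))

for j in range(3):
    assert sp.simplify(comm(lambda e: N(j, e), E, f) + sp.I * P(j, f)) == 0
    for k in range(3):
        delta = 1 if j == k else 0
        assert sp.simplify(comm(lambda e: N(j, e), lambda e: P(k, e), f)
                           + sp.I * delta * E(f)) == 0
print("spinless parts of Eq. (2.1) close on the expected commutators")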
A usual phrase in the literature is that in QFT, the fact that m_1 = m_2 follows from the CPT theorem. As shown e.g., in [11,12], it is a consequence of locality since, by construction, states described by local covariant equations are direct sums of IRs for a particle and its antiparticle with equal masses. However, since the concept of locality is not formulated in the framework of (H, O, S), this concept does not have a clear physical meaning, and this fact has been pointed out even in known textbooks (see e.g., [4]). Therefore, QFT does not give a rigorous proof that m_1 = m_2.

Also, one can pose the question of what happens if locality is only an approximation: is the equality of masses then exact or approximate? However, since, at the quantum level, the physical meaning of the concept of locality is unclear, the physical meaning of this question is also unclear. Consider a simple model where electromagnetic and weak interactions are absent. Then the fact that the proton and the neutron have equal masses has nothing to do with locality; it is only a consequence of the fact that they belong to the same isotopic multiplet, i.e., they are simply different states of the same object, the nucleon.

Note that in Poincare invariant quantum theories, there can exist elementary particles for which all additive quantum numbers are zero. Such particles are called neutral because they coincide with their antiparticles.

Particles and antiparticles in AdS quantum theories

In theories where the symmetry algebra is the AdS algebra, the structure of IRs is known (see e.g., [2,13]). The operator M_04 is the AdS analog of the energy operator. Let W be the Casimir operator M_ab M^ab, where a sum over repeated indices is assumed. Here, lowering and raising indices are carried out using the tensor η_ab defined in Sec. 1.2 and, as noted after Eq. (1.2), η_44 = 1 for the AdS case. As follows from the Schur lemma, the operator W has only one eigenvalue in every IR. By analogy with Poincare quantum theory, we will not consider AdS tachyons, and then one can define the AdS mass µ such that µ ≥ 0 and µ^2 is the eigenvalue of the operator W.

As noted in Sec. 1.2, the procedure of contraction from the AdS algebra to the Poincare one is defined in terms of the parameter R such that M_ν4 = RP_ν. This procedure has a physical meaning only if R is rather large. In that case, the AdS mass µ and the Poincare mass m are related as µ = Rm, and the relation between the AdS and Poincare energies is analogous. Since AdS symmetry is more general than Poincare symmetry, µ is more general than m. In contrast to the Poincare masses and energies, the AdS masses and energies are dimensionless. As noted in Sec. 3.4, at the present stage of the universe, R is of the order of 10^26 m. Then the AdS masses of the electron, the Earth and the Sun are of the order of 10^39, 10^93 and 10^99, respectively. The fact that even the AdS mass of the electron is so large might be an indication that the electron is not a true elementary particle. In addition, the present upper limit for the photon mass is 10^-17 eV. This value seems to be an extremely tiny quantity. However, the corresponding AdS mass is of the order of 10^16, and so even a mass which is treated as extremely small in Poincare invariant theory might be very large in AdS invariant theory.
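The orders of magnitude quoted above can be reproduced with a short calculation. This is only a rough numerical illustration under two assumptions that are not stated explicitly in the text: the dimensionless AdS mass is formed as µ = Rmc/h (with h the reduced Planck constant), and R ≈ 1.5·10^26 m, the order of magnitude quoted later for the contraction parameter.

import math

hbar = 1.055e-34   # reduced Planck constant (denoted h in the text), J*s
c = 2.998e8        # speed of light, m/s
R = 1.5e26         # assumed contraction parameter, m

masses_kg = {
    "electron": 9.109e-31,
    "Earth": 5.972e24,
    "Sun": 1.989e30,
    "photon upper bound": 1e-17 * 1.602e-19 / c**2,  # from the 10^-17 eV limit
}
for name, m in masses_kg.items():
    mu = R * m * c / hbar   # dimensionless AdS mass under the stated assumption
    print(f"{name}: mu ~ 10^{math.log10(mu):.0f}")

The output reproduces the orders of magnitude quoted in the text: roughly 10^39 for the electron, 10^93 for the Earth, 10^99 for the Sun, and 10^16 for the photon mass bound.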
In the AdS case, there are IRs with positive and negative energies, and they belong to the discrete series [2,13]. Therefore, one can define particles and antiparticles. If µ_1 is the AdS mass for a positive energy IR, then the energy spectrum contains the eigenvalues µ_1, µ_1 + 1, µ_1 + 2, ... ∞, and, if µ_2 is the AdS mass for a negative energy IR, then the energy spectrum contains the eigenvalues −µ_2, −µ_2 − 1, −µ_2 − 2, ... − ∞. Therefore, the situation is pretty much analogous to that in Poincare invariant theories, and, without involving local AdS invariant equations, there is no way to conclude whether the mass of a particle equals the mass of the corresponding antiparticle. These equations describe local fields in the AdS space. In view of what was said above about the background space in QFT, these fields are not defined within the framework of (H, O, S). Therefore, in AdS invariant theory, just as in the case of Poincare invariant theory, within the framework of (H, O, S) it is also impossible to prove that the mass of a particle equals the mass of the corresponding antiparticle.

Since Poincare quantum theory is obtained from AdS quantum theory by the contraction R → ∞ with m = µ/R, Poincare massless IRs are obtained from AdS IRs not only when µ = 0 but when µ is any finite number. In Poincare quantum theories, massless particles are characterized by the fact that for them helicity is a conserved quantum number. For this reason, as shown in [13] (see also [2]), the AdS massless particles are described by IRs where µ = 2 + s. Before the discovery of neutrino oscillations, neutrinos were treated as massless with the left-handed helicity and antineutrinos as massless with the right-handed helicity, but now they are treated as massive particles. The photon is usually treated as massless although, as noted in [14], QED will not be broken if the photon has a small nonzero mass. In contrast to the neutrino case, it is described not by IRs of the purely Poincare algebra but by IRs of the Poincare algebra with spatial reflections added (see e.g., [10]). For this reason, the photon is a neutral particle because it coincides with its own antiparticle.

Problems with the definition of particles and antiparticles in dS quantum theories

In this section we explain why the description of particles and antiparticles in the case of dS symmetry considerably differs from that in the cases of Poincare and AdS symmetries described in the preceding sections.

The Casimir operator W = (1/2)M_ab M^ab is now defined in the same way as in the AdS case but, as noted after Eq. (1.2), η_44 = −1 for the dS case. By analogy with the AdS case, it follows from the Schur lemma that the operator W has only one eigenvalue in every IR, and one can define the dS mass µ such that µ ≥ 0 and µ^2 is the eigenvalue of the operator W.

In his book [15], Mensky describes the construction of unitary IRs of the dS group using the theory of induced representations (see e.g., [16,17]). In [2] we describe how this construction can be used for constructing self-adjoined IRs of the dS algebra. Here we explicitly describe two implementations of such a construction: when the representation space is a space of functions on two Lorentz hyperboloids and when it is a space of functions on the three-dimensional unit sphere in the four-dimensional space.
In the first case, the space of IR is the space of functions (f 1 (v), f 2 (v)) on two Lorentz hyperboloids with the range in the space of unitary IR of the group SU(2) with the spin s and such that where, as in Sec.2.1, s is the spin operator and ||...|| is the norm in the space of unitary IR of the group SU(2) with the spin s. In this case, the explicit calculation [2] shows that the action of representation operators on functions with the support on the first hyperboloid is where µ > 0 is a parameter which can be called the dS mass, J, N and l(v) are given by the same expressions as in Sec.2.1, B = {M 41 , M 42 , M 43 } and E = M 40 .At the same time, the action on functions with the support on the second hyperboloid is given by Note that the expressions for the action of the Lorentz algebra operators on the first and second hyperboloids are the same and coincide with the corresponding expressions for IRs of the Poincare algebra in Eq. (2.1).At the same time, the expressions for the action of the operators M 4µ on the first and second hyperboloids differ by sign. In the second case, the representation space is the space of functions on the group SU (2).Its elements can be represented by the points u = (u, u 4 ) of the three-dimensional sphere S 3 in the fourdimensional space as u 4 + iσu where σ are the Pauli matrices and u 4 = ±(1 − u 2 ) 1/2 for the upper and lower hemispheres, respectively.Then the Hilbert space of the IR is the space of functions ϕ(u) on S 3 with the range in the space of the unitary IR of the su(2) algebra with the spin s and such that where du is the SO(4) invariant volume element on S 3 .The explicit calculation [2] shows that the operators have the form where the relation between the points of the upper hemisphere and the first hyperboloid is u = v/v 0 and u 4 = (1 − u 2 ) 1/2 .The relation between the points of the lower hemisphere and the second hyperboloid The equator of S 3 where u 4 = 0 has measure zero with respect to the upper and lower hemispheres.For this reason one might think that it is of no interest for describing particles in dS theory.Nevertheless, while none of the components of u has the magnitude greater than unity, the points of the equator in terms of velocities is characterized by the condition that |v| is infinitely large and therefore standard Poincare momentum p = mv is infinitely large too.This poses a question whether p always has a physical meaning.From mathematical point of view, Eq. (2.4) might seem more convenient than Eqs.(2.2) and (2.3) since S 3 is compact and there is no need to break it into the upper and lower hemispheres.However, Eqs. (2.2) and (2.3) are convenient for investigating Poincare approximation while the expressions (2.4) are not convenient for this purpose because the Lorentz boost operators N in them depend on µ. Indeed, if we define then in the formal limit when R → ∞, µ → ∞ but E, P and m remain finite, Eqs.(2.2) and (2.3) become Eq.(2.1) for positive and negative energy IRs of the Poincare algebra, respectively.Therefore, dS symmetry is broken in the formal limit R → ∞ because one IR of the dS algebra splits into two IRs of the Poincare algebra with positive and negative energies and with equal masses. 
Since the number of states in dS IRs is twice as big as the number of states in IRs of the Poincare algebra, one might think that each IR describes a particle and its antiparticle simultaneously.But this is not true even from the fact that when we talk about a particle and its antiparticle, we mean that there are two different IRs, but in this case there is only one IR.In addition, the question of what is the mass difference between a particle and its antiparticle if R is finite has no physical meaning because, according to the Schur lemma, the operator W has only one eigenvalue in this IR and all states have the same mass µ.Another argument that this is not true is as follows. Let us call states with the support of their wave functions on the first hyperboloid or on the northern hemisphere as particles and states with the support on the second hyperboloid or on the southern hemisphere as their antiparticles.The physical meaning of such definitions is problematic since there is no guaranty that the energy of particles is always positive and the energy of antiparticles is always negative.Nev-ertheless, even with such a definition, states which are superpositions of a particle and its antiparticle obviously belong to the representation space under consideration, i.e., they are not prohibited.However, this contradicts the superselection rule that the wave function cannot be a superposition of states with opposite electric charges, baryon and lepton quantum numbers etc.Therefore, in the dS case, there are no superselection rules which prohibit superpositions of states with opposite electric charges, baryon quantum numbers etc.In addition, in this case it is not possible to define the concept of neutral particles, i.e., particles which coincide with their antiparticles (e.g., the photon).This question will be discussed in Chap. 4. As noted in Sec.1.2 and shown in the discussion of Eq. (2.6), dS symmetry is more general than Poincare one, and the latter can be treated as a special degenerate case of the former in the formal limit R → ∞.This means that, with any desired accuracy, any phenomenon described in the framework of Poincare symmetry can be also described in the framework of dS symmetry if R is chosen to be sufficiently large, but there also exist phenomena for explanation of which it is important that R is finite and not infinitely large (see [2]). The fact that dS symmetry is higher than Poincare one is clear even from the fact that, in the framework of the latter symmetry, it is not possible to describe states which are superpositions of states on the upper and lower hemispheres.Therefore, breaking one dS IR into two independent IRs defined on the northern and southern hemispheres obviously breaks the initial symmetry of the problem.This fact is in agreement with the Dyson observation (mentioned above) that dS group is more symmetric than Poincare one. When R → ∞, standard concepts of particle-antiparticle, electric charge and baryon and lepton quantum numbers are restored, i.e., in this limit superpositions of particle and antiparticle states become prohibited according to the superselection rules.Therefore, those concepts have a rigorous physical meaning only as a result of symmetry breaking at R → ∞, but if R is finite they can be only good approximations when R is rather large. 
The observable equality of masses of particles and their corresponding antiparticles can now be explained as a consequence of the fact that observable properties of elementary particles can be described not by exact Poincare symmetry but by dS symmetry with a very large but finite value of R. In this approximation, for combining a particle and its antiparticle into one object, there is no need to assume locality and involve local field functions because a particle and its antiparticle already belong to the same IR of the dS algebra (compare with the above remark about the isotopic symmetry in the proton-neutron system). As noted above, in this approximation it is not correct to pose the question about the mass difference between a particle and its antiparticle because they have the same mass µ. However, it is correct to pose the following problem.

When R is finite but very large, the concepts of electric charge and baryon number are not precise, but make sense with very high accuracy. Let us assume that our experiment shows that there are particles with electric charge e and −e. In the formal limit R → ∞ there can be no particles which are superpositions of states with the charges e and −e. However, in the approximation when R is very large but finite and the concept of electric charge is meaningful with very high accuracy, such superpositions are possible. In that case, if in some experiment we observe protons, then with a very small probability we can observe antiprotons. For example, if in some experiment we observe elastic scattering of protons on a neutral target T, p + T → p + T, then with a very small probability we will observe the process p + T → p̄ + T. With the current value of R, the probability of such a process is negligible, but in the early stages of the universe it can be noticeable. But the calculation of the probability of such a process can only be carried out when a particle theory based on dS symmetry rather than Poincare symmetry is constructed.

dS vs. AdS and baryon asymmetry of the universe problem

In this chapter we have discussed how the concepts of particles and antiparticles should be defined in the cases of Poincare, AdS and dS symmetries. In the first two cases, the situations are similar: IRs where the energies are ≥ 0 are treated as particles, and IRs where the energies are ≤ 0 are treated as antiparticles. Then a problem arises as to how to prove that the masses of a particle and the corresponding antiparticle are the same. As noted in Secs. 2.1 and 2.2, without involving local covariant equations there is no way to conclude whether this is the case. Since the concept of locality is not formulated in the framework of (H, O, S), QFT does not give a rigorous proof that the masses of a particle and the corresponding antiparticle are the same.
As described in Sec.2.3, in the case of dS symmetry, the approach to the concept of particle-antiparticle is radically different from the approaches in the cases of Poincare and AdS symmetries.Here, the fundamental objects are not particles and antiparticles, but objects described by self-adjoined IRs of the dS algebra.One might ask why, then, experimental data in particle physics give the impression that particles and antiparticles are fundamental.As explained in Sec.2.3, the matter is that, at this stage of the universe, the contraction parameter R from the dS to Poincare algebra is very large and, in the formal limit R → ∞, one IR of the dS algebra splits into two IRs of the Poincare algebra corresponding to a particle and its antiparticle with the same masses.In this case, for proving the equality of masses there is no need to involve local covariant fields and the proof is given fully in the framework of (H, O, S).As noted in Sec.1.1, in the spirit of the Dirac equation, there should not be separate particles the electron and positron, but there should be one particle combining them.In the case of dS symmetry, this idea is implemented exactly in this way.It has been also noted that in this case there are no conservation laws for additive quantum numbers: from the experiment it seems that such conservation laws take place, but in fact, these laws are only approximate because, at the present stage of the universe the parameter R is very large.Thus, we can conclude that dS symmetry is more fundamental than Poincare and AdS symmetries. We now discuss the dS vs. AdS problem from the point of view whether standard gravity can be obtained in the framework of a free theory.In standard nonrelativistic approximation, gravity is characterized by the term −Gm 1 m 2 /r in the mean value of the mass operator.Here m 1 and m 2 are the particle masses and r is the distance between the particles.Since the kinetic energy is always positive, the free nonrelativistic mass operator is positive definite and therefore there is no way to obtain gravity in the framework of a free theory.Analogously, in Poincare invariant theory, the spectrum of the free two-body mass operator belongs to the interval [m 1 + m 2 , ∞) while the existence of gravity necessarily requires that the spectrum should contain values less than m 1 + m 2 . As explained in Sec.2.2, in theories where the symmetry algebra is the AdS algebra, for positive energy IRs, the AdS Hamiltonian has the spectrum in the interval [µ, ∞) and µ > 0 has the meaning of the mass.Therefore the situation is pretty much analogous to that in Poincare invariant theories.In particular, the free two-body mass operator again has the spectrum in the interval [µ 1 + µ 2 , ∞) and therefore there is no way to reproduce gravitational effects in the free AdS invariant theory. 
In contrast to the situation in Poincare and AdS invariant theories, the free mass operator in dS theory is not bounded below by the value of µ 1 + µ 2 .The discussion in Sec.2.3 shows that this property by no means implies that the theory is unphysical.In the dS case, there is no law prohibiting that in the nonrelativistic approximation, the mean value of the mass operator contains the term −Gm 1 m 2 /r.Therefore if one has a choice between Poincare, AdS and dS symmetries then the only chance to describe gravity in a free theory is to choose dS symmetry, and, as discussed in [2], a possible nature of gravity is that gravity is a kinematical effect in a quantum theory based not on complex numbers but on a finite ring or field.This is an additional argument in favor of dS vs. AdS. We now apply this conclusion to the known problem of baryon asymmetry of the universe (BAU).This problem is formulated as follows.According to modern particle and cosmological theories, the numbers of baryons and antibaryons in the early stages of the universe were the same.Then, since the baryon number is the conserved quantum number, those numbers should be the same at the present stage.However, at this stage, the number of baryons is much greater than the number of antibaryons. However, as noted above, it seems to us that the baryon quantum number is conserved because at the present stage of the evolution of the universe, the value of R is enormous.As noted in Sec.1.2, it is reasonable to expect that R changes over time, and as noted in Sec.3.3, in semiclassical approximation, R coincides with the radius of the universe.As noted in Sec.2.3, even if R is very large but finite then there is a non-zero probability of transitions particle↔antiparticle.But, according to cosmological theories, at early stages of the universe, R was much less that now.At such values of R, the very concepts of particles, antiparticles and baryon number do not have a physical meaning.So, the statement that at early stages of the universe the numbers of baryons and antibaryons were the same, also does not have a physical meaning, and, as a consequence, the BAU problem does not arise. Chapter 3 Solving cosmological constant problem Introduction At the present stage of the universe (when semiclassical approximation is valid), in the phenomenon of cosmological acceleration (PCA), only nonrelativistic macroscopic bodies are involved, and one might think that here there is no need to involve quantum theory.However, ideally, the results for every classical (i.e., non-quantum) problem should be obtained from quantum theory in semiclassical approximation.We will see that, considering PCA from the point of view of quantum theory sheds a new light on understanding this problem. In PCA, it is assumed that the bodies are located at large (cosmological) distances from each other and sizes of the bodies are much less than distances between them.Therefore, interactions between the bodies can be neglected and, from the formal point of view, the description of our system is the same as the description of N free spinless elementary particles. However, in the literature, PCA is usually considered in the framework of dark energy and other exotic concepts.In Sec.3.2 we argue that such considerations are not based on rigorous physical principles.In Sec.1.2 we have explained how symmetry should be defined at the quantum level, and in Sec.3.3 we describe PCA in the framework of our approach. 
History of dark energy

This history is well-known. Immediately after the creation of GR, Einstein believed that, since, in his opinion, the universe is stationary, the cosmological constant Λ in his equations must be non-zero, and this point of view was described in his paper [18] written in 1917. On the other hand, in 1922, Friedman found solutions of the equations of GR with Λ = 0 which provide theoretical evidence that the universe is expanding [19]. The author of [20] states that Lundmark was the first person to find observational evidence for expansion in 1924, three years before Lemaître and five years before Hubble, but, for some reason, Lundmark's research was not adopted and his paper was not published. In 1927, Lemaître independently reached a conclusion similar to Friedman's on a theoretical basis, and also presented observational evidence (based on the Doppler effect) for a linear relationship between the distance to galaxies and their recessional velocity [21]. In the paper [22] written in 1929, Hubble described his results which observationally confirmed Lundmark's and Lemaître's findings.

According to Gamow's memoirs, after Hubble showed Einstein the results of observations at the Mount Wilson observatory, Einstein said that introducing Λ ≠ 0 was the biggest blunder of his life. After that, the statement that Λ must be zero was advocated even in textbooks.

The explanation was that, according to the philosophy of GR, matter creates a curvature of space-time, so when matter is absent, there should be no curvature, i.e., the space-time background should be the flat Minkowski space. That is why, when in 1998 it was realized that the data on supernovae could be described only with Λ ≠ 0, the impression was that this was a shock to something fundamental. However, the term with Λ in the Einstein equations was then moved from the left-hand side to the right-hand one, and it was declared that in fact Λ = 0, but that the observed Λ ≠ 0 is the manifestation of a hypothetical field which, depending on the model, is called dark energy or quintessence. In spite of the fact that, as noted in many publications (see e.g., [23] and references therein), their physical nature remains a mystery, most publications on PCA involve those concepts.
Several authors criticized this approach from the following considerations.GR without the contribution of Λ has been confirmed with a high accuracy in experiments in the Solar System.If Λ is as small as it has been observed, it can have a significant effect only at cosmological distances while for experiments in the Solar System, the role of such a small value is negligible.The authors of [24] titled "Why All These Prejudices Against a Constant?"note that it is not clear why we should think that only a special case Λ = 0 is allowed.If we accept the theory containing the gravitational constant G which is taken from outside, then why can't we accept a theory containing two independent constants?Let us note that currently there is no physical theory which works under all conditions.For example, it is not correct to extrapolate nonrelativistic theory to cases when speeds are comparable to c and to extrapolate classical physics for describing energy levels of the hydrogen atom.GR is a successful non-quantum theory for describing macroscopic phenomena where large masses are present, but extrapolation of GR to the case when matter disappears is not physical.One of the principles of physics is that a definition of a physical quantity is a description of how this quantity should be measured.As noted in Sec.2.1, the concepts of space and its curvature are purely mathematical.Their aim is to describe the motion of real bodies.But the concepts of empty space and its curvature should not be used in physics because nothing can be measured in a space which exists only in our imagination.Indeed, in the limit of GR when matter disappears, space remains and has a curvature (zero curvature when Λ = 0, positive curvature when Λ > 0 and negative curvature when Λ < 0) while, since space is only a mathematical concept for describing matter, a reasonable approach should be such that in this limit space should disappear too. A common principle of physics is that, when a new phenomenon is discovered, physicists should try to first explain it proceeding from the existing science.Only if all such efforts fail, something exotic can be involved.But for PCA, an opposite approach was adopted: exotic explanations with dark energy or quintessence were accepted without serious efforts to explain the data in the framework of existing science. Although the physical nature of dark energy and quintessence remains a mystery, there exists a wide literature where the authors propose QFT models of them.For example, as noted in [25], there are an almost endless number of explanations for dark energy.While in most publications, only proposals about future discovery of dark energy are considered, the authors of [23] argue that dark energy has already been discovered by the XENON1T collaboration.In June 2020, this collaboration reported an excess of electron recoils: 285 events, 53 more than expected 232 with a statistical significance of 3.5σ.However, in July 2022, a new analysis by the XENONnT collaboration discarded the excess [26]. 
Several authors (see e.g., [25,27,28]) proposed approaches where some quantum fields manifest themselves as dark energy at early stages of the universe, and some of them are active today.However, as shown in our publications and in the present paper, at least at the present stage of the universe (when semiclassical approximation is valid), PCA can be explained without uncertainties proceeding from universally recognized results of physics and without involving models and/or assumptions the validity of which has not been unambiguously proved yet. Explanation of cosmological acceleration Standard particle theories involve self-adjoined IRs of the Poincare algebra.They are described even in textbooks and do not involve Minkowski space.Therefore, when Poincare symmetry is replaced by more general dS or AdS one, dS and AdS particle theories should be based on self-adjoined IRs of the dS or AdS algebras.However, physicists usually are not familiar with such IRs because they believe that dS and AdS quantum theories necessarily involve quantum fields on dS or AdS spaces, respectively.The mathematical literature on unitary IRs of the dS group is wide but there are only a few papers where such IRs are described for physicists.For example, the excellent Mensky's book [15] exists only in Russian.At the same time, to the best of our knowledge, self-adjoint IRs of the dS algebra have been described from different considerations only in [29,30,31,32], and some of those results have been mentioned in Sec.2.3.It has been noted that the space of an IR consists of functions defined on two hyperboloids and in the limit R → ∞ one IR of the dS algebra splits into two IRs of the Poincare algebra with positive and negative energies. As noted in Sec.3.1, the results on IRs can be applied not only to elementary particles but even to macroscopic bodies when it suf-fices to consider their motion as a whole.We will consider this case and will consider the operators M 4µ not only in Poincare approximation but taking into account dS corrections.If those corrections are small, it suffices to consider only states with the support on the upper hyperboloid and describe the representation operators by Eq. (2.2). We define the quantities E, P, m by Eq. (2.6) and consider the non-relativistic approximation when |v| ≪ 1.If we wish to work with units where the dimension of velocity is m/s, we should replace v by v/c.If p = mv then it is clear from the expressions for B in Eq. (2.2) that p becomes the real momentum P only in the limit R → ∞. The operators in Eq. 
(2.2) act in momentum representation and, at this stage, we have no spatial coordinates yet. For describing the motion of particles in terms of spatial coordinates, we must define the position operator. A question arises: is there a law defining this operator? The postulate that the coordinate and momentum representations are related by the Fourier transform was taken at the dawn of quantum theory by analogy with classical electrodynamics, where the coordinate and wave vector representations are related by this transform. But the postulate has not been derived from anywhere, and there is no experimental confirmation of the postulate beyond the nonrelativistic semiclassical approximation. Heisenberg, Dirac, and others argued in favor of this postulate but, for example, in the problem of describing photons from distant stars, the connection between the coordinate and momentum representations should be not through the Fourier transform but as shown in [2]. However, since PCA involves only nonrelativistic bodies, the position operator in momentum representation can be defined as usual, i.e., as r = ih∂/∂p, and in semiclassical approximation we can treat p and r as usual vectors.

Then, as follows from Eq. (2.2), in this approximation the energy can be written as E = mc^2 + H(P, r), where H = E − mc^2 is the classical nonrelativistic Hamiltonian. As follows from these expressions, H(P, r) = P^2/(2m) − mc^2 r^2/(2R^2), where the last term is the dS correction to the non-relativistic Hamiltonian. Now it follows from the Hamilton equations that even one free particle is moving with the acceleration a = rc^2/R^2, (3.3) where a is the acceleration, r is the radius vector and Λ = 3/R^2. The observed quantities are not absolute but relative with respect to a body that is chosen as the reference frame. We can take into account that the representation describing a free N-body system is the tensor product of the corresponding single-particle IRs. It means that every N-body operator M_ab is a sum of the corresponding single-particle operators.

Consider a system of two free particles described by the variables P_j and r_j (j = 1, 2). Define the standard nonrelativistic variables q_12 (the relative momentum) and r_12 = r_1 − r_2 (the relative radius vector). Then explicit calculations (see e.g., Eq. (61) in [29], Eq. (17) in [31] or Eq. (17) in [32]) give that the two-body mass is M(q_12, r_12) = m_1 + m_2 + H_nr(r_12, q_12), where H_nr(r, q) = q^2/(2m_12) − m_12 c^2 r^2/(2R^2), (3.5) H_nr is the internal two-body Hamiltonian and m_12 is the reduced two-particle mass. Then, as a consequence of the Hamilton equations, in semiclassical approximation, the relative acceleration is again given by Eq. (3.3), but now a is the relative acceleration and r is the relative radius vector.

From a formal point of view, such a calculation must be carried out to confirm mathematically that here standard non-relativistic concepts work since, from the point of view of these concepts, if, for example, the acceleration of the first particle is a_1 and of the second is a_2, then their relative acceleration equals a_1 − a_2.

The fact that the relative acceleration of noninteracting bodies is not zero does not contradict the law of inertia, because this law is valid only in the case of Galilei and Poincare symmetries, and in the formal limit R → ∞, a becomes zero as it should be. Since c is the contraction parameter for the transition from Poincare invariant theory to Galilei invariant one, the results of the latter can be obtained from the former in the formal limit c → ∞, and Galilei invariant theories do not contain c. Then one might ask why Eq.
(3.3) contains c although we assume that the bodies in PCA are nonrelativistic.The matter is that Poincare invariant theories do not contain R but we work in dS invariant theory and assume that, although c and R are very large, they are not infinitely large, and the quantity c 2 /R 2 in Eq. (3.3) is finite. As noted in Sec.2.4, dS symmetry is more general than AdS one.Formally, an analogous calculation using the results of Chap.8 of [2] on IRs of the AdS algebra gives that, in the AdS case, a = −rc 2 /R 2 , i.e., we have attraction instead of repulsion.The experimental facts that the bodies repel each other confirm that dS symmetry is indeed more general than AdS one. The relative accelerations given by Eq. (3.3) are the same as those derived from GR if the curvature of dS space equals Λ = 3/R 2 , where R is the radius of this space.However, the crucial difference between our results and the results of GR is as follows.While in GR, R is the radius of the dS space and can be arbitrary, as explained in detail in Sec.1.2, in quantum theory, R has nothing to do with the radius of the dS space, it is the coefficient of proportionality between M 4µ and P µ , it is fundamental to the same extent as c and h, and a question why R is as is does not arise.Therefore, our approach gives a clear explanation why Λ is as is. The fact that two free particles have a relative acceleration is known for cosmologists who consider dS symmetry at the classical level.This effect is called the dS antigravity.The term antigravity in this context means that particles repulse rather than attract each other.In the case of the dS antigravity, the relative acceleration of two free particles is proportional (not inversely proportional!) to the distance between them.This classical result (which in our approach has been obtained without involving dS space and Riemannian geometry) is a special case of dS symmetry at the quantum level when semiclassical approximation works with a good accuracy. As follows from Eq. (3.3), the dS antigravity is not important for local physics when r ≪ R. At the same time, at cosmological distances the dS antigravity is much stronger than any other interaction (gravitational, electromagnetic etc.).One can consider the quantum two-body problem with the Hamiltonian H nr given by Eq. (3.5).Then it is obvious that the spectrum of the operator H nr is purely continuous and belongs to the interval (−∞, ∞) (see also [29,33] for details).This does not mean that the theory is unphysical since stationary bound states in standard theory become quasistationary with a very large lifetime if R is large. In the literature it is often stated that quantum theory of gravity should become GR in classical approximation.In Sec.1.1 we argue that this is probably not the case because at the quantum level the concept of space-time background does not have a physical meaning.Our results for the cosmological acceleration obtained from semiclassical approximation to quantum theory are compatible with GR but in our approach, space-time background is absent from the very beginning. In GR, the result (3.3) does not depend on how Λ is interpreted, as the curvature of empty space or as the manifestation of dark energy.However, in quantum theory, there is no freedom of interpretation.Here R is the parameter of contraction from the dS Lie algebra to the Poincare one, it has nothing to do with the radius of the background space and with dark energy and it must be finite because dS symmetry is more general than Poincare one. 
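The step from the Hamiltonian to Eq. (3.3) can be verified symbolically. The following is only a one-dimensional sketch of the Hamilton-equation step described above, written for the single-particle Hamiltonian with the dS correction term; it is an illustration, not an independent derivation.

import sympy as sp

q, r, m, c, R = sp.symbols('q r m c R', positive=True)
H = q**2 / (2*m) - m*c**2*r**2 / (2*R**2)   # nonrelativistic Hamiltonian with dS correction

r_dot = sp.diff(H, q)        # dr/dt = dH/dq
q_dot = -sp.diff(H, r)       # dq/dt = -dH/dr
a = sp.simplify(q_dot / m)   # acceleration d^2r/dt^2 = (dq/dt)/m, since r_dot = q/m

print(a)   # -> c**2*r/R**2, i.e., Eq. (3.3) with Lambda = 3/R**2

The same algebra applied to the internal Hamiltonian H_nr of Eq. (3.5), with the reduced mass m_12, gives the relative acceleration quoted in the text.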
Discussion We have shown that, at the present stage of the universe (when semiclassical approximation is valid), the phenomenon of cosmological acceleration is simply a kinematical consequence of quantum theory in semiclassical approximation, and this conclusion has been made without involving models and/or assumptions the validity of which has not been unambiguously proved yet. The concept of the cosmological constant Λ has been originally defined in GR which is the purely classical (i.e., not quantum) theory.Here Λ is the curvature of space-time background which, as noted in Secs.1.1 and 2.1, is a purely classical concept.Our consideration does not involve GR, and, as explained in Sec.1.2, the contraction parameter R from dS invariant to Poincare invariant theory has nothing to do with the radius of dS space. However, in QFT, Λ is interpreted as vacuum energy density, and the cosmological constant problem is described in a wide literature (see e.g.[34] and references therein).Usually, this problem is considered in the framework of Poincare invariant QFT of gravity on Minkowski space.This theory contains only one phenomenological parameter -the gravitational constant G, and Λ is defined by the vacuum expectation value of the energy-momentum tensor.The theory contains strong divergencies which cannot be eliminated because the theory is not renormalizable.The results can be made finite only with a choice of the cutoff parameter.Since G is the only parameter in the theory, the usual choice of the cutoff parameter in momentum space is h/l P where l P is the Plank length.Then, if h = c = 1, G has the dimension length 2 and Λ is of the order of 1/G.This value is approximately by 122 orders of magnitude greater than the experimental one, and this situation is called vacuum catastrophe.It is discussed in a wide literature how the discrepancy with experiment can be reduced, but the problem remains. The approach to finding Λ in terms of G cannot be fundamental for several reasons.First of all, as noted in Sec.1.2, fundamental dS and AdS quantum theories originally do not contain dimensional parameters.The dimensional quantities (c, h, R) can be introduced to those theories only as contraction parameters for transions from more general theories to less general ones.QFT of gravity is based on Poincare symmetry which is a special degenerate case of dS and AdS symmetries in the formal limit R → ∞.This theory contains G, but it is not explained how G is related to contraction from dS and AdS symmetries to Poincare symmetry.Also, as noted in Sec.1.1, in quantum theories involving space-time background the conditions (H, O, S) are not met and such theories contain mathematical inconsistencies.The problem of constructing quantum theory of gravity is one of the most fundamental problems in modern theory and the assumption that this theory will be Poincare invariant QFT is not based on rigorous physical principles. In any case, as follows from the very problem statement about the cosmological acceleration, Λ should not depend on G. Indeed, as noted in Sec.3.1, in this problem, it is assumed that the bodies are located at large (cosmological) distances from each other and sizes of the bodies are much less than distances between them.Therefore, all interactions between the bodies (including gravitational ones) can be neglected and, from the formal point of view, the description of our system is the same as the description of N free spinless elementary particles. 
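As a numerical aside on the figures used in this section: the "122 orders of magnitude" discrepancy and the value of R implied by Λ = 3/R^2 can be reproduced with the following rough estimate. The Planck length and the observed value of Λ are the standard figures quoted in the text; the cutoff estimate Λ ~ 1/l_P^2 is the naive one described above.

import math

l_P = 1.616e-35              # Planck length, m
Lambda_obs = 1.3e-52         # observed cosmological constant, 1/m^2
Lambda_naive = 1.0 / l_P**2  # naive cutoff estimate of order 1/G in units h = c = 1

print("discrepancy: about 10^%.0f" % math.log10(Lambda_naive / Lambda_obs))
print("R = sqrt(3/Lambda) = %.1e m" % math.sqrt(3.0 / Lambda_obs))

This gives a discrepancy of roughly 10^121-10^122 (the precise figure depends on the conventions used for the cutoff) and R of about 1.5·10^26 m, in line with the numbers quoted in the text.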
As explained in detail in Sec. 1.2, at the quantum level, the parameter R is fundamental to the same extent as c and h; it has nothing to do with the relation between Minkowski and dS spaces, and the problem of why R is as is does not arise, by analogy with the problem of why c and h are as are. As noted in Sec. 3.3, the results for cosmological acceleration in our approach and in GR are given by the same expression (3.3), but the crucial difference between our approach and GR is as follows. While in GR, R is the radius of the dS space and can be arbitrary, in our approach, R is defined uniquely because it is a parameter of contraction from the dS algebra to the Poincare one. Therefore, our approach explains why the cosmological constant is as is.

Therefore, at the present stage of the universe (when semiclassical approximation is valid), the phenomenon of cosmological acceleration has nothing to do with dark energy or other artificial reasons. This phenomenon is an inevitable kinematical consequence of quantum theory in semiclassical approximation, and the vacuum catastrophe and the problem of the cosmological constant do not arise.

Since 1998, it has been confirmed in several experiments [35] that Λ > 0, and Λ = 1.3·10^-52 m^-2 with an accuracy of 5%. Therefore, at the current stage of the universe, R is of the order of 10^26 m. Since Λ is very small and the evolution of the universe is a complex process, cosmological repulsion does not appear to be the main effect determining this process, and other effects (e.g., gravity, microwave background and cosmological nucleosynthesis) may play a much larger role.

Chapter 4 Open problems

As noted by Dyson in his fundamental paper [6], nonrelativistic theory is a special degenerate case of relativistic theory in the formal limit c → ∞, and relativistic theory is a special degenerate case of dS and AdS theories in the formal limit R → ∞, and, as shown in Sec. 2.4, dS symmetry is more general than AdS one.

The paper [6] appeared in 1972, i.e., more than 50 years ago, and, in view of Dyson's results, a question arises as to why general particle theories (QED, electroweak theory and QCD) are still based on Poincare symmetry and not on dS symmetry. Probably physicists believe that, since, at least at the present stage of the universe, R is much greater than even the sizes of stars, dS symmetry can play an important role only in cosmology and there is no need to use it for the description of elementary particles.

We believe that this argument is not consistent because usually more general theories shed new light on standard concepts. It is clear from the discussion in Sec. 2.4 that the construction of dS theory will be based on considerably different concepts than the construction of standard quantum theory, because in dS theory the concepts of particles, antiparticles and additive quantum numbers (electric charge, baryon quantum number and others) can be only approximate.

Another problem discussed in a wide literature is that a supersymmetric generalization exists in the AdS case but does not exist in the dS one. This may be a reason why supersymmetry has not been discovered yet.
In [2] we have proposed a criterion when theory A is more general (fundamental) than theory B: Definition: Let theory A contain a finite nonzero parameter and theory B be obtained from theory A in the formal limit when the parameter goes to zero or infinity.Suppose that with any desired accuracy theory A can reproduce any result of theory B by choosing a value of the parameter.On the contrary, when the limit is already taken then one cannot return back to theory A and theory B cannot reproduce all results of theory A. Then theory A is more general than theory B and theory B is a special degenerate case of theory A. As shown in [2], by using this Definition one can prove that: a) nonrelativistic theory is a special degenerate case of relativistic theory in the formal limit c → ∞; b) classical theory is a special degenerate case of quantum theory in the formal limit h → 0; c) relativistic theory is a special degenerate case of dS and AdS theories in the formal limit R → ∞; d) standard quantum theory (SQT) based on complex numbers is a special degenerate case of finite quantum theory (FQT) based on finite mathematics with a ring or field of characteristic p in the formal limit p → ∞. As noted in Sec.1.2, the properties a)-c) take place in SQT, and below we will discuss the property d).As described in Secs.2.2 and 2.3, in IRs of the AdS algebra, the energy spectrum of the energy operator can be either positive or negative while in the dS case, the spectrum necessarily contains energies of both signs.As explained in Sec.2.4, for this reason, the dS case is more physical than the AdS one.We now explain that in the FQT analog of the AdS symmetry the situation is analogous to that in the dS case of SQT.For definiteness, we consider the case when p is odd. By analogy with the construction of positive energy IRs in SQT, in FQT we start the construction from the rest state, where the AdS energy is positive and equals µ.Then we act on this state by raising operators and gradually get states with higher and higher energies, i.e., µ+1, µ+2, ....However, in contrast to the situation in SQT, we cannot obtain infinitely large numbers.When we reach the state with the energy (p−1)/2, the next state has the energy (p−1)/2+1 = (p+1)/2 and, since the operations are modulo p, this value also can be denoted as −(p − 1)/2 i.e., it may be called negative.When this procedure is continued, one gets the energies −(p − 1)/2 + 1 = −(p − 3)/2, −(p − 3)/2 + 1 = −(p − 5)/2, ... and, as shown in [2], the procedure finishes when the energy −µ is reached. Therefore the spectrum of energies contains the values (µ, µ + 1, ..., (p − 1)/2) and (−µ, −(µ + 1), ..., −(p − 1)/2) and in the formal limit p → ∞, this IR splits into two IRs of the AdS algebra in SQT for a particle with the energies µ, µ + 1, µ + 2, ...∞ and antiparticle with the energies −µ, −(µ + 1), −(µ + 2), ... − ∞ and both of them have the same mass µ.We conclude that in FQT, all IRs necessarily contain states with both, positive and negative energies and the mass of a particle automatically equals the mass of the corresponding antiparticle.This is an example when FQT can solve a problem which standard quantum AdS theory cannot.By analogy with the situation in the standard dS case, for combining a particle and its antiparticle together, there is no need to involve additional coordinate fields because a particle and its antiparticle are already combined in the same IR. 
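The wrap-around of the energy spectrum described above is easy to visualize with a toy computation. The values of p and µ below are arbitrary small numbers chosen only for display; a physically relevant p would be enormous.

# Raising the AdS energy modulo an odd prime p inevitably produces negative values.
p, mu = 23, 3

def symmetric(k):
    # represent a residue modulo p in the symmetric range -(p-1)/2 ... (p-1)/2
    k = k % p
    return k - p if k > (p - 1) // 2 else k

spectrum, e = [], mu
while True:
    spectrum.append(symmetric(e))
    if symmetric(e) == -mu:   # the procedure stops once the energy -mu is reached
        break
    e += 1

print(spectrum)
# [3, 4, ..., 11, -11, -10, ..., -3]: a single IR contains energies of both signs,
# and in the formal limit p -> infinity it splits into a particle IR (mu, mu+1, ...)
# and an antiparticle IR with the same mass mu.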
Since the AdS case in FQT satisfies all necessary physical conditions, it is reasonable to investigate whether this case has a supersymmetric generalization.We first note that representations of the standard Poincare superalgebra are described by 14 operators.Ten of them are the representation operators of the Poincare algebra-four momentum operators and six operators of the Lorentz algebra, and in addition, there are four fermionic operators.The anticommutators of the fermionic operators are linear combinations of the Lorentz algebra operators, the commutators of the fermionic operators with the Lorentz algebra operators are linear combinations of the fermionic operators and the fermionic operators commute with the momentum operators.However, the latter are not bilinear combinations of fermionic operators. From the formal point of view, representations of the AdS superalgebra osp (1,4) are also described by 14 operators -ten representation operators of the so(2,3) algebra and four fermionic operators.There are three types of relations: the operators of the so(2,3) algebra commute with each other as in Eqs.(1.2), anticommutators of the fermionic operators are linear combinations of the so(2,3) operators and commutators of the latter with the fermionic operators are their linear combinations.However, representations of the osp(1,4) superalgebra can be described exclusively in terms of the fermionic operators.The matter is that anticommutators of four operators form ten independent linear combinations.Therefore, ten bosonic operators can be expressed in terms of fermionic ones.This is not the case for the Poincare superalgebra since it is obtained from the so(2,3) one by contraction.One can say that the representations of the osp (1,4) superalgebra is an implementation of the idea that supersymmetry is the extraction of the square root from the usual symmetry (by analogy with the treatment of the Dirac equation as a square root from the Klein-Gordon one).From the point of view of the osp(1,4) supersymmetry, only four fermionic operators are fundamental, in contrast to the case when in dS and AdS symmetries there are ten fundamental operators. As noted in Sec.2.3, in the approach when a particle and its antiparticle belong to the same IR, it is not possible to define the concept of neutral particles.For example, a problem arises whether the photon is the elementary particle.In Standard Model (based on Poincare invariance) only massless particles are treated as elementary.However, as shown in the seminal paper by Flato and Fronsdal [36] (see also [37]), in standard AdS theory, each massless IR can be constructed from the tensor product of two singleton IRs and, as noted in [2], this property takes place also in FQT.The concept of singletons has been proposed by Dirac in his paper [38] titled "A Remarkable Representation of the 3 + 2 de Sitter group", and, as discussed in [2], in FQT this concept is even more remarkable than in SQT.As noted in Sec.2.2, even the fact that the AdS mass of the electron is of the order of 10 39 poses a problem whether the known elementary particles are indeed elementary.In [2] we discussed a possibility that only Dirac singletons are true elementary particles. 
As explained in [2], in FQT, physical quantities can only be finite, divergences cannot exist in principle, and the concepts of particles, antiparticles, probability and additive quantum numbers can be only approximate if p is very large. The construction of FQT is one of the most fundamental (if not the most fundamental) problems of quantum theory.

The above discussion indicates that fundamental quantum theory has a very long way to go (in agreement with Weinberg's opinion [39] that a new theory may be centuries away).
Glycolysis Metabolites and Risk of Atrial Fibrillation and Heart Failure in the PREDIMED Trial The increased prevalence of atrial fibrillation (AF) and heart failure (HF) highlights the need to better understand the mechanisms underlying these cardiovascular diseases (CVDs). In the present study, we aimed to evaluate the association between glycolysis-related metabolites and the risk of AF and HF in a Mediterranean population at high risk of CVD. We used two case–control studies nested within the PREDIMED trial. A total of 512 incident AF cases matched to 734 controls, and 334 incident HF cases matched to 508 controls, were included. Plasma metabolites were quantified by using hydrophilic interaction liquid chromatography coupled with high-resolution negative ion mode MS detection. Conditional logistic regression analyses were performed. The results showed no association between baseline plasma glycolysis intermediates and other related metabolites with AF. Only phosphoglycerate was associated with a higher risk of HF (OR for 1 SD increase: 1.28; 95% CI: 1.06, 1.53). The present findings do not support a role of the glycolysis pathway in the pathogenesis of AF. However, the increased risk of HF associated with phosphoglycerate requires further studies. Introduction Heart failure (HF) and atrial fibrillation (AF)-the most common type of arrhythmiahave emerged as major cardiac public health problems. The increasing prevalence of both conditions is associated with both increased morbidity and mortality [1,2]. HF and AF often coexist, and the two dysfunctions share many common risk factors including age, obesity, type 2 diabetes (T2D), and hypertension [3]. However, the traditional risk factors do not completely explain all AF and HF cases, and a deeper knowledge of the pathophysiology of HF and AF is needed [4,5]. In this sense, metabolomics could enhance our understanding of their pathogenic pathways, and help develop preventive strategies through the identification of novel risk biomarkers for these complex diseases. In patients with AF and HF, perturbations of the glycolysis metabolism have been detected [6,7]. Nevertheless, it is still debated whether altered glycolysis metabolism happens and is implicated before the development of AF and HF, or whether it is a consequence of these diseases [8]. So far, prospective metabolomic studies on AF and HF have focused on a broad range of metabolites involved in various pathways, and just a few have also included glycolysis-related metabolites. In the Framingham Heart Study, sucrose, pyruvate, glucose/fructose/galactose, phosphoglycerate, α-glycerophosphate, phosphoenolpyruvate, and lactate intermediate glycolysis metabolites were not significantly associated with an increased risk of AF [9]. Similarly, in the Atherosclerosis Risk in Communities (ARIC) study, no association was observed between glycerol 3-phosphate, lactate, and AF incidence [10]. Regarding HF, the evidence is even more limited. Only the ARIC study has evaluated the association between plasma lactate and incident HF, showing an increased risk in those individuals located in the highest quartile compared to those in the lowest [11]. As a result, current knowledge about the association between intermediate metabolites of the glycolysis pathway and the development of HF and AF is scarce and inconclusive. 
Our aim was to examine associations between baseline glycolysis intermediates (3-phosphoglycerate, fructose/glucose/galactose, lactate, α-glycerophosphate, phosphoenolpyruvate, and the ratio of phosphoenolpyruvate to lactate) and other related metabolites (hexose monophosphate and sucrose) and the risk of incident AF and HF in the PREDIMED study.

Table 1 depicts the baseline characteristics of the study population for the two case–control studies. AF cases were more likely to have hypertension, HF cases were more likely to present diabetes, and both had higher BMI compared with controls.

Table 2 shows the association between baseline plasma metabolites and the risk of AF incidence. Overall, no significant associations were observed between any of the metabolites analyzed and AF. No significant interactions were observed except for the association between α-glycerophosphate and AF according to the intervention groups. The matched OR and 95% CI per 1-SD increment in the control group was 1.48 (1.11, 1.99), whereas no significant association was observed in the intervention (MedDiet+EVOO and MedDiet+mixed nuts combined) group.

Baseline Glycolysis Intermediate Metabolites, Other Related Metabolites, and Risk of HF

Associations between baseline plasma metabolites and HF risk are shown in Table 3. In the fully adjusted model, we only observed a significant association per 1-SD increment in levels of phosphoglycerate (OR per 1-SD increase: 1.28; 95% CI: 1.06, 1.53), whereas no association was observed when this metabolite was modeled as quartiles. Fructose/glucose/galactose metabolites were associated with an increased risk of HF in the crude model (adjusted for age, sex, and recruitment center by the matching approach of controls) when modeled as continuous (OR per 1-SD increase: 1.17; 95% CI: 1.01, 1.35), but this association did not reach statistical significance after multivariate adjustment. Similarly, sucrose was significantly associated with a higher risk of HF when modeled as both quartiles (OR: 1.92; 95% CI: 1.26, 2.94 for Q4 vs. Q1) and continuous (OR per 1-SD increase: 1.26; 95% CI: 1.08, 1.47) in the raw analysis, but the association disappeared in the multivariate model. We found a significant statistical interaction (p-value < 0.05) between the intervention group (MedDiet groups merged vs. control group) and phosphoglycerate and sucrose when modeled as continuous. The risk of HF was higher in participants allocated to the control group (OR per 1-SD increase: 1.67; 95% CI: 1.21, 2.30, and OR per 1-SD increase: 1.48; 95% CI: 1.11, 1.99 for phosphoglycerate and sucrose, respectively). We also observed a significant statistical interaction with T2D for phosphoglycerate, showing a higher risk of HF in those individuals with T2D (OR per 1-SD increase: 1.57; 95% CI: 1.24, 1.98). No other interactions were observed between baseline metabolites, the intervention group, and T2D.

Baseline Glycolysis Intermediate Metabolites, Other Related Metabolites, and Risk of AF and HF by Diabetes Status

Associations between baseline plasma metabolites and AF and HF risk by diabetes status are shown in Table 4. No associations were observed between a 1-SD increase in any of the metabolites and AF by diabetes status. Regarding HF risk, only a significant positive association was observed for a 1-SD increase in phosphoglycerate in participants with diabetes (OR per 1-SD increase: 1.57; 95% CI: 1.24, 1.99). No other significant associations were observed. Table 4.
Association between baseline plasma glycolysis species (per 1-SD increase) and other related metabolites, and incident atrial fibrillation and heart failure, by diabetes status. Discussion The results of the present analysis, which included two case-control studies nested within the PREDIMED trial, showed no association between plasma glycolysis and related metabolites and AF risk. Phosphoglycerate was the only metabolite associated with a higher risk of HF when modeled as a continuous variable. Of note, interactions between phosphoglycerate and the intervention group and T2D status were found, showing that the adverse effect of this metabolite on HF was limited to the control group and individuals with T2D. Previous studies have reported glucose oxidation impairment and increased glycolysis activity in patients with HF, which is reflected by higher levels of pyruvate and/or lactate compared to healthy controls [8,[12][13][14]. It has been suggested that these changes in myocardial metabolism may precede cardiac anatomical modifications that lead to heart failure [15]. However, it is still unclear if altered glycolysis metabolic profiling could help to identify those patients at high risk for AF and HF. To the best of our knowledge, this is the first study specifically conducted to evaluate the association between baseline glycolysis intermediates and other related metabolites and the risk of AF and HF. In agreement with our findings, in the Framingham Heart Study, plasma sucrose, phosphoglycerate, α-glycerophosphate, PEP, pyruvate, glucose/fructose/galactose, and lactate were unassociated with AF development [9]. Similarly, no associations between serum glycerol 3-phosphate, lactate, and glucose identified through a non-targeted metabolomic approach and the risk of AF were reported in the ARIC study [10]. Although these results suggest that glycolysis intermediate metabolites do not influence the development of AF, the paucity of studies in this field does not allow us to draw solid conclusions. Regarding HF, studies are even scarcer. To the best of our knowledge, only one metabolomic study, conducted in the framework of the ARIC study, has analyzed the association between plasma lactate and incident HF, showing that individuals in the highest quartile had a 35% increased risk of HF development compared to those in the lowest quartile [11]. The discrepancies in our results could be due to the method used to analyze lactate. In the ARIC study plasma lactate was analyzed using an enzymatic reaction to convert lactate to pyruvate. However, in the present study, lactate was measured using hydrophilic interaction liquid chromatography, coupled with high-resolution negative ion mode mass spectrometry detection. Of note, the higher risk of HF associated with phosphoglycerate observed in the present analysis deserves further study since, as far as we know, this is the first time that this association has been reported. Importantly, serum uric acid-a putative modulator of carbohydrate and lipid metabolism [16]-has been recently found to be an independent risk factor for both HF and fatal HF in an Italian cohort study of more than 20,000 subjects [17]. The present study has some limitations that should be considered. First, the study participants comprised elderly individuals at high risk of CVD, making the generalization of the findings difficult. 
Second, we do not have information for pyruvate, the end product of glycolysis, because it was one of the metabolites most susceptible to column degradation during the HILIC-neg analyses and did not pass the quality control process. However, to overcome this important limitation, we calculated the PEP to lactate ratio, so as to have an indirect measure of pyruvate. Third, we did not adjust our analyses for HF for the use of diuretics, which is common in these patients. Fourth, although metabolomics have the advantage of detecting a broad range of metabolites that can have important clinical utility in relation to disease prediction, blood and urine metabolites cannot fully inform us about the metabolic changes occurring only in the myocardium or any particular organ since, for this purpose, the analysis of changes in metabolites across the organ must be assessed [18]. Participants included in the current study presented other comorbidities, such as obesity or type 2 diabetes, which are diseases accompanied by pathological changes in multiple systems and organs, and previously associated with glycolysis-related metabolites [19]. This fact could contribute to the plasma metabolomics profile obscuring the true association between glycolysis intermediate metabolites and AF and HF risk. Study Design and Participants The PREDIMED study (ISRCTN35739639) was a randomized, multicenter, parallelgroup clinical trial conducted in Spain to evaluate the effectiveness of two Mediterranean diets (one supplemented with extra virgin olive oil and the other with mixed nuts) on the primary prevention of cardiovascular disease (CVD), compared to a low-fat control diet. The design and methods of the PREDIMED study can be found elsewhere (PRED-IMED website: http://www.predimed.es (accessed on 1 April 2021), and [20]). Briefly, 7447 elderly participants aged 55-80 years at high risk of CVD were recruited, from 2003 to 2009, and allocated to one of the three possible intervention groups: MedDiet supplemented with EVOO, MedDiet supplemented with mixed nuts, and low-fat control group. Eligible participants were men and women free from CVD at baseline who reported either T2D or at least three major risk factors including smoking, elevated LDL cholesterol levels, low HDL cholesterol levels, hypertension, overweight or obesity, or a family history of premature coronary heart disease. For the present analysis, we used two case-control studies nested within the PRED-IMED trial. Figure 1 shows the flowchart for both case-control studies. A total of 512 and 334 incident cases of AF and HF were ascertained, respectively, after excluding prevalent cases, incident cases without plasma samples, and cases without HILIC-negative metabolites. We selected matched controls by using the incidence density sampling with replacement [21]. This method involves randomly matching each case to a sample of all of those participants who are at risk at the time of the occurrence of the incident case. Selected controls could be sampled again as a control for future cases, and may later become cases themselves [22]. One to three controls per case were matched by year of birth (±5 years), sex, and recruitment center. The number of controls was 734 and 508 for AF and HF, respectively (Figure 1). Sample Collection and Metabolomic Analysis Participants provided fasting blood samples at their baseline visits, which were processed to obtain plasma and stored at −80 • C at each recruitment center until analysis. 
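Looking back at the control-selection scheme described above, the following is a minimal sketch of incidence-density (risk-set) sampling with matching on sex, recruitment center, and year of birth (±5 years). The cohort table, its column names, and the event rates are invented for illustration; the actual PREDIMED matching was performed on the trial database, not with this code.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical cohort: one row per participant (all columns are illustrative).
n = 2000
cohort = pd.DataFrame({
    "id": np.arange(n),
    "sex": rng.integers(0, 2, n),
    "center": rng.integers(0, 10, n),
    "birth_year": rng.integers(1925, 1950, n),
    "event": (rng.random(n) < 0.08).astype(int),   # incident AF/HF indicator
    "time": rng.uniform(0, 14, n),                 # years until event or censoring
})

def risk_set_controls(cohort, case, n_controls=3):
    """Incidence-density sampling: controls are drawn from participants still
    at risk when the case occurs, matched on sex, recruitment center and year
    of birth (+/- 5 years). Controls may be reused for later cases and may
    themselves become cases afterwards."""
    at_risk = cohort[
        (cohort["time"] >= case["time"])
        & (cohort["id"] != case["id"])
        & (cohort["sex"] == case["sex"])
        & (cohort["center"] == case["center"])
        & ((cohort["birth_year"] - case["birth_year"]).abs() <= 5)
    ]
    k = min(n_controls, len(at_risk))
    return at_risk.sample(n=k) if k > 0 else at_risk

cases = cohort[cohort["event"] == 1]
matched_sets = {case["id"]: risk_set_controls(cohort, case)
                for _, case in cases.iterrows()}
```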
In order to reduce bias and inter-assay variability, samples from case-control pairs were randomly sorted and analyzed in the same batch. Intermediate glycolysis metabolitesphosphoglycerate (HMDB0000807), fructose/glucose/galactose (HMDB0000122), lactate (HMDB0000190), α-glycerophosphate (HMDB0000126), and phosphoenolpyruvate (HMDB0000263)-and other related metabolites-hexose monophosphate (HMDB0000124) and sucrose (HMDB0000258)-were measured using hydrophilic interaction liquid chromatography coupled with high-resolution negative ion mode mass spectrometry detection (HILIC-neg), as previously described [23,24]. Specifically, HILIC analyses of water-soluble metabolites in the HILIC-neg mode were conducted using an LC-MS system composed of an ACQUITY UPLC system (Waters) and a QTRAP 5500 mass spectrometer (SCIEX). Plasma samples (30 µL) were prepared via protein precipitation, with the addition of 4 volumes of 80% methanol containing inosine-15N4, thymine-d4, and glycocholate-d4 internal standards (Cambridge Isotope Laboratories). Samples were centrifuged for 10 min at 9000× g, maintaining a stable temperature of 4 • C, and the supernatants were injected directly into a 150 × 2.0 mm Luna NH2 column (Phenomenex). The column was eluted at a flow rate of 400 µL/min with initial conditions of 10% mobile phase A (20 mmol/L ammonium acetate and 20 mmol/L ammonium hydroxide in water) and 90% mobile phase B (10 mmol/L ammonium hydroxide in 75:25 v/v acetonitrile/methanol), followed by a 10-min linear gradient to 100% mobile phase A. MS analyses were performed using electrospray ionization and selective multiple reaction monitoring scans in the negative ion mode. To enable assessment of data quality, and to facilitate data standardization across the analytical queue and sample batches, pairs of pooled plasma reference samples were analyzed at intervals of 20 study samples. One sample from each pair of pooled references served as a passive quality control sample to evaluate the analytical reproducibility for measurement of each metabolite, while the other pooled sample was used to standardize using a "nearest neighbor" approach, as previously described [25]. Standardized values were calculated using the ratio of the value in each sample over the nearest pooled plasma reference multiplied by the median value measured across the pooled references. Pyruvate (HMDB0000243) was one of the metabolites most susceptible to column degradation during the HILIC-neg analyses, and did not pass the quality control process. Therefore, information about this metabolite was not included in the present analyses. Similarly, hexose diphosphate (HMDB0001058) was not analyzed because of the high amount of missing values (>60%). We also calculated the PEP to lactate ratio. All metabolomic analyses were performed at the Broad Institute of MIT & Harvard. Outcome Assessment AF and HF were considered to be secondary endpoints in the PREDIMED trial protocol (a composite of non-fatal myocardial infarction, stroke, and cardiovascular disease death being the primary endpoint). In the present analysis we considered all of the incident cases diagnosed between 2003 and December 2017. Participants from one recruitment center were censored at December 2014 to be selected as controls, since in this center the follow-up was stopped prematurely. 
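Returning to the pooled-reference standardization step described above, the short sketch below implements the stated "nearest neighbor" scaling: each study sample is divided by the nearest pooled plasma reference and multiplied by the median of all pooled references. The injection layout, the spacing of the pooled references, and the values are invented; the actual processing was done in the Broad Institute's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical raw peak areas for one metabolite along the analytical queue;
# pooled plasma references are interspersed among the study samples.
raw = rng.lognormal(mean=10.0, sigma=0.3, size=60)
is_pool = np.zeros(raw.size, dtype=bool)
is_pool[::10] = True      # a pooled reference every 10 injections (illustrative)

def nearest_neighbor_standardize(raw, is_pool):
    """Standardized value = (sample / nearest pooled reference) * median of pooled references."""
    idx = np.arange(raw.size)
    pool_idx = idx[is_pool]
    pool_median = np.median(raw[is_pool])
    out = raw.astype(float).copy()
    for i in idx[~is_pool]:
        nearest = pool_idx[np.argmin(np.abs(pool_idx - i))]
        out[i] = raw[i] / raw[nearest] * pool_median
    return out

standardized = nearest_neighbor_standardize(raw, is_pool)
```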
Physicians blinded to the intervention group collected information related to these outcomes from continuous contact with participants and primary health care physicians, yearly follow-up visits, and annual ad hoc reviews of medical charts and consultation of the National Death Index. When a clinical diagnosis of AF or HF was made, the corresponding clinical records of hospital discharge, outpatient clinics, and family physicians' records were collected. The medical charts were codified with an alphanumeric code and sent anonymously to the clinical endpoint adjudication committee, which adjudicated the events according to prespecified criteria. All relevant documents were independently appraised by two cardiologists, and any disagreement on the classification of the event was solved by contacting a third cardiologist (the committee's chair). In some cases, further data were demanded in order to complete the adjudication. AF cases were identified from an annual review of all medical records of each participant, and annual electrocardiograms (ECG) performed at yearly follow-up visits. When AF was present in the ECG or quoted anywhere in the medical reports, the relevant documents were sent to the clinical endpoint committee for their evaluation. We did not include AF events associated with myocardial infarction or cardiac surgeries. HF diagnosis was made in accordance with the 2005 guidelines of the European Society of Cardiology [26]. Covariates Assessment At baseline, a general questionnaire about lifestyle, medical history, educational level, medication use, and previous history of diseases was administered to all participants. The Spanish version of the Minnesota Leisure Time Physical Activity Questionnaire [27] was used to assess physical activity. Trained study personnel took anthropometric measurements, and BMI was estimated as weight divided by height squared (kg/m 2 ). Statistical Analysis We normalized and scaled all individual metabolites in multiples of 1 SD using Blom's inverse normal transformation [28]. The baseline characteristics of the study population were presented for cases and controls expressed as mean ± SD for quantitative traits and n (%) for categorical variables. The participants were divided into quartiles of intermediate glycolysis metabolites, with cut-off points estimated according to the distribution of metabolites among the controls (participants who did not develop HF or AF during the follow-up). Matched odds ratios (OR) and their 95% confidence intervals (CIs), considering the first quartile as the reference category, were calculated using conditional logistic models, which considered the matching between cases and controls. We also calculated the matched ORs for 1-SD increments of each metabolite, including them as continuous variables. We performed crude models (matched by sex, age, and recruitment center) and multivariate models adjusted for the intervention group (MedDiet vs. control group), body mass index (kg/m 2 ), smoking (current, former, or never), leisure time physical activity (metabolic equivalent tasks in min/d), prevalent chronic conditions at baseline (dyslipidemia, hypertension, and T2D), family history of coronary heart disease, education level (primary or lower/secondary or higher), and medication for dyslipidemia, hypertension, and T2D. The test for interactions between individual metabolites as continuous (per 1-SD increment) and the intervention group (both MedDiet groups combined vs. 
the control group) or T2D prevalence was performed by means of likelihood ratio tests. All statistical procedures were performed using R v. 3.6.3 statistical software, and a two-sided P-value of less than 0.05 was considered significant. Data Availability Statement: Data described in the manuscript, code book, and analytic code will not be made publicly available. PREDIMED data can be requested by signing a data sharing agreement as approved by the relevant research ethics committees and the steering committee of the PREDIMED trial (www.predimed.es (accessed on 1 April 2021)).
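As an illustration of the analysis pipeline described above, the sketch below combines a Blom rank-based inverse-normal transform with a matched conditional logistic model per 1-SD increment. The study's own analyses were run in R 3.6.3; this is a Python/statsmodels approximation on synthetic data, the covariate set is deliberately truncated, and every column name is hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import norm, rankdata
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(1)

def blom_inverse_normal(x):
    """Rank-based inverse-normal (Blom) transform: Phi^-1((rank - 3/8) / (n + 1/4)).
    The result is approximately N(0, 1), so a one-unit change is a 1-SD increment."""
    x = np.asarray(x, dtype=float)
    return norm.ppf((rankdata(x) - 3.0 / 8.0) / (len(x) + 0.25))

# Synthetic matched data: 300 case-control sets with 1 case and 2 controls each.
n_sets, per_set = 300, 3
df = pd.DataFrame({
    "matched_set": np.repeat(np.arange(n_sets), per_set),
    "case": np.tile([1, 0, 0], n_sets),
    "phosphoglycerate": rng.lognormal(0.0, 0.5, n_sets * per_set),
    "bmi": rng.normal(29, 4, n_sets * per_set),   # one illustrative covariate
})
df["phosphoglycerate_sd"] = blom_inverse_normal(df["phosphoglycerate"])

X = df[["phosphoglycerate_sd", "bmi"]]
res = ConditionalLogit(df["case"], X, groups=df["matched_set"]).fit()

or_per_sd = np.exp(res.params["phosphoglycerate_sd"])
lo, hi = np.exp(res.conf_int().loc["phosphoglycerate_sd"])
print(f"matched OR per 1-SD increase: {or_per_sd:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```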
The role of ecological niche evolution on diversification patterns of birds distinctly distributed between the Amazonia and Atlantic rainforests The Amazonian and Atlantic Forest share several organisms that are currently isolated but were continuously distributed during the Quaternary period. As both biomes are under different climatic regimes, paleoclimatic events may have modulated species' niches due to a lack of gene flow and imposing divergent selection pressure. Here, we assessed patterns of ecological niche overlap in 37 species of birds with disjunct ranges between the Amazonian and Brazilian Atlantic Forests. We performed niche overlap analysis and ecological niche modeling using four machine-learning algorithms to evaluate whether species' ecological niches evolved or remained conserved after the past South American biogeographic events. We found a low niche overlap among the same species populations in the two biomes. However, niche similarity tests showed that, for half of the species, the overlap was higher than the ones generated by our null models. These results lead us to conclude that niche conservatism was not enough to avoid ecological differentiation among species even though detected in many species. In sum, our results support the role of climatic changes in late-Pleistocene—that isolated Amazon and the Atlantic Forest—as a driving force of ecological differences among the same species populations and potential mechanism of current diversification in both regions. Introduction The Quaternary paleoclimatic cycles of glaciation and interglaciation events profoundly affected neotropical rainforests ecosystems [1][2][3][4][5][6]. During not allow us to classify them, a priori, with exclusively Amazonian or Atlantic populations. Thus, we excluded those species from our study. We built our database from occurrence records available at the Global Biodiversity Information Facility (http://www.gbif.org), eBird (http://ebird.org/content/ebird/), Museu Paraense Emílio Goeldi, VertNet (http://vertnet.org/) and, WikiAves (http://www.wikiaves.com/). For each species, we compiled information on geographical coordinates, country, state, municipality, biome, genus, epithet, scientific name, sources, and identification number (voucher). We excluded species with less than ten geographically unique occurrence records in each biome, considering the resolution of the climatic variables (see below). We drew a minimum convex For model construction, we partitioned species' occurrences according to their origin: Amazon or Atlantic Forest. We then used the data from one region to fit the model and evaluated its accuracy using occurrence data from the other biome. We performed this analysis on both ways, in which models fitted using occurrences from the Amazon were evaluated with data from the Atlantic Forest. Likewise, models fitted with Atlantic forest data were evaluated using data from the Amazon. By adjusting the model for one population and evaluating its effectiveness when predicting the other population (reciprocal analysis), we obtain an estimate of niche change, since these models characterize the niche of the species. Since we geographically partitioned species' data, there is a risk of artificially creating a sampling bias if pseudoabsence and background allocation are also not restricted [41]. In order to avoid this bias, we delimited the accessible area for each subset by calculating the minimum convex polygon and adjusting the buffer around the polygon [42]. 
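As a rough sketch of the accessible-area step just described, the code below builds a minimum convex polygon around occurrence records, buffers it, and restricts background points to the resulting area. The coordinates and the 2-degree buffer width are invented for illustration; the study used its own buffer choice within the ENMTML workflow.

```python
import numpy as np
from shapely.geometry import MultiPoint, Point

rng = np.random.default_rng(0)

# Hypothetical occurrence records (lon, lat) for one population of a species,
# e.g. its Amazonian records only.
occurrences = np.column_stack([rng.uniform(-70, -50, 40),
                               rng.uniform(-10, 0, 40)])

# Minimum convex polygon around the records ...
mcp = MultiPoint([tuple(xy) for xy in occurrences]).convex_hull

# ... enlarged by a buffer to approximate the area accessible to that
# population (the 2-degree width is an arbitrary illustration).
accessible = mcp.buffer(2.0)

# Pseudo-absence / background points are then restricted to this polygon,
# here by simple rejection sampling over its bounding box.
minx, miny, maxx, maxy = accessible.bounds
background = []
while len(background) < 500:
    p = Point(rng.uniform(minx, maxx), rng.uniform(miny, maxy))
    if accessible.contains(p):
        background.append((p.x, p.y))
```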
The pseudo-absences selection method was used through a simple bioclimatic presence model-Bioclim [43, 44]-to randomly allocate the pseudo-absences with the environmental profile and accessible areas to the species along with the same number of presences for each species. Pseudo-absences were randomly sampled with the environmental profile (RSEP; [45]) restricted to species' accessible areas [46]. We built niche models using the following machine-learning algorithms, based on presence and pseudo-absences: Support Vector Machine (SVM), Gaussian (GAU), and based on Presence and Background, Random Forest (RDF) and, General Linear Models (GLM): i). The SVM method belongs to the family of generalized linear classifiers. This method obtains a limit around the database, rather than estimating probability density [47], characterized by minimizing the probability of classifying patterns not observed by the probability distribution of the data erroneously [48]. The GLM method works with logistic regression, based on the relationship between the mean response variable and the linear combination of explanatory variables, suitable for ecological relationships analysis when dealing with non-linear data structures [49]. The GAU method is a probabilistic model in which Gaussian distribution models both the spatial dependency and the binary observation, generating the corresponding Gaussian variable [50]. The RDF method is an effective tool in forecasting, which uses individual predictors and correlations [51]. The RDF method builds several classification trees relating presences and absences to the environmental variables, and then combine all predictions based on trees frequency of classification [52]. The modeling process was carried out using the ENMTML package [53]. We used the Receiver Operating Characteristic (ROC) threshold, which balances commission and omission errors (sensitivity and specificity), to transform the suitability matrices into absence-presence maps. The area below the ROC threshold is known as AUC and serves as an evaluation measure of the model independently from the chosen limit [54]. The AUC values range from 0 to 1, with values below 0.5 indicating that the model has no better efficacy than a randomized distribution; values between 0.5-0.7 indicate acceptable accuracy while values between 0.7-0.9 indicate good accuracy. Lastly, the values above 0.9 indicate optimal predictions [55]. This procedure has criticisms about its use because it omits information about the goodness of model performance, the uncertainty of false positives, and their spatial distribution dimensions [56][57][58]. However, we used this evaluation method for providing results that optimize the probability thresholds by maximizing the percentage of actual presence and absences [56], as well as being widely used in niche modeling studies [48, [59][60][61][62][63]. Finally, to mitigate the errors and uncertainties of individual models, we used the ensemble technique, which consists of averaging out the best models for each species to generate a more robust prediction [64]. Analysis of the species' climatic niche We used the 19 bioclimatic variables obtained from WorlClim to compare the niche of 37 species that have populations in both Amazon and the Atlantic Forest. We performed a calibrated PCA in the entire environmental space of the study ( [67], PCA-env) to measure the niche overlap of target species based on the use of the available environment. 
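The study fitted four algorithms through the ENMTML R package and combined them into an ensemble; the sketch below uses a single random forest in scikit-learn on synthetic predictors, purely to illustrate the reciprocal fit-on-one-biome, evaluate-on-the-other scheme and the AUC criterion described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def toy_biome(n=300, shift=0.0):
    """Synthetic presence/pseudo-absence data: rows are points, columns are
    bioclimatic predictors; `shift` mimics a different climatic regime."""
    X = rng.normal(shift, 1.0, size=(n, 4))
    presence = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > shift).astype(int)
    return X, presence

X_ama, y_ama = toy_biome(shift=0.0)   # "Amazonian" records
X_atl, y_atl = toy_biome(shift=0.8)   # "Atlantic Forest" records

model = RandomForestClassifier(n_estimators=500, random_state=0)

model.fit(X_ama, y_ama)               # fit on one biome ...
auc_ama_to_atl = roc_auc_score(y_atl, model.predict_proba(X_atl)[:, 1])

model.fit(X_atl, y_atl)               # ... and reciprocally
auc_atl_to_ama = roc_auc_score(y_ama, model.predict_proba(X_ama)[:, 1])

print(f"Amazon -> Atlantic AUC: {auc_ama_to_atl:.2f}")
print(f"Atlantic -> Amazon AUC: {auc_atl_to_ama:.2f}")
```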
This method quantifies the overlap of climatic niche involving three steps: (1) calculation of the density of occurrence points and environmental factors along the environmental axes, using a multivariate analysis, (2) measurement of niche overlap over gradients of the multivariate analysis and (3) analysis of niche similarity through statistical tests [27]. We then calculated the niche overlap between the Amazonian and Atlantic populations using Schoener's D metric (1970). This metric ranges from 0 (without overlap) to 1 (complete overlap) for each pair of disjunctly distributed species [27]. Then, to evaluate the hypothesis of niche evolution or conservation, the method performs two different routines of randomization [27], through two distinct components in the niche comparison: the equivalence test and the similarity test. The niche equivalence test verifies whether the niche overlap between populations is constant by randomly relocating the occurrences of each species of both biomes [27]. The similarity test assesses whether related species occupy similarly, but rarely identical, niches [65]. In this test, we verify whether niche overlap values remain unchanged (1 to 2), followed by a reciprocal comparison (2 to 1) given a randomly distributed interval. We performed this test 1,000 times, which guarantees the rejection of the null hypothesis. However, if the Schoener's D ranges within 95% of the simulated density values, the null hypothesis-niche equivalence-cannot be rejected [27]. Among the distinct components in the niche comparison, in addition to the niche equivalence and similarity tests, it is also possible to assess niche stability, expansion and unfilling, when comparing the known distributions of the species in both biomes. The stability test represents the niche proportion of species populations in a biome superimposing the niche in the other biome, demonstrating the stability that species retain their niches in space and time [66]. The expansion test evaluates the niche proportion of species populations in one biome that does not overlap with the niche of those in another biome [66]. In expansion, species occupy areas with different climatic conditions than those in the compared niche [67]. The unfilling test evaluates niche expansion of the populations in climatically analogous areas, only partially filling the climatic niche in this area, not overlapping with the compared niche [68]. We considered threshold values above 0.7 as high (results representing at least 70% of the analogous niche), between 0.5 and 0.7 as partial, and below 0.5 as low (results representing at most 50% of the analogous niche), for evaluation of the stability, expansion and unfilling test results based on previous studies [54, [69][70][71][72][73][74]. Finally, we estimated how much the niche of each species evolved or remained conserved in the Amazonian and Atlantic populations. Ecological niche models (ENM) of birds with disjunct distribution between Amazon and the Atlantic Forest The ecological niche models of Amazonian populations presented higher values of AUC than those found for populations of the same species in the Atlantic Forest (Table 2). Such better performance was likely partially due to the higher number of occurrence records in the Amazon (n = 19,828, mean = 535.89) than in the Atlantic Forest (n = 3,490, mean = 94.32); which allowed a better characterization of the realized niche of Amazonian populations. 
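Stepping back to the overlap metric defined above, the following is a minimal, one-dimensional sketch of Schoener's D with a pooled-and-resplit null. The published PCA-env method works on a two-dimensional kernel-smoothed environmental grid corrected for availability, so this is a deliberate simplification on invented scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def schoener_d(a, b, bins=50, grid_range=(-4, 4)):
    """D = 1 - 0.5 * sum|p_a - p_b| over a gridded environmental axis, with
    p_a, p_b the normalized occurrence densities (0 = no overlap, 1 = identical)."""
    pa, _ = np.histogram(a, bins=bins, range=grid_range)
    pb, _ = np.histogram(b, bins=bins, range=grid_range)
    pa = pa / pa.sum()
    pb = pb / pb.sum()
    return 1.0 - 0.5 * np.abs(pa - pb).sum()

# Hypothetical scores of the two populations along one PCA-env axis.
amazon = rng.normal(0.0, 1.0, 300)
atlantic = rng.normal(1.2, 0.8, 120)
d_obs = schoener_d(amazon, atlantic)

# Equivalence-style null: pool the occurrences and re-split them at random.
pooled = np.concatenate([amazon, atlantic])
null_d = []
for _ in range(1000):
    rng.shuffle(pooled)
    null_d.append(schoener_d(pooled[:amazon.size], pooled[amazon.size:]))

# A small p indicates the observed overlap is lower than expected for equivalent niches.
p_equiv = (np.sum(np.array(null_d) <= d_obs) + 1) / (len(null_d) + 1)
print(f"Schoener's D = {d_obs:.2f}, equivalence-test p = {p_equiv:.3f}")
```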
Ecological niche models of Amazonian populations presented 57% (n = 21 species) of model with AUC values between 0.7 to 0.9 (considered a good performance), and 35% (n = 13) above 0.9 (considered optimal performance). Only 8% (n = 3) had AUC values between 0.5 and 0.7 (considered acceptable). For the Atlantic Forest ENMs, 73% (n = 27) had an acceptable prediction (AUC = 0.5-0.7), and 27% (n = 10) showed a good one (AUC = 0.7-0.9). The resultant potential distribution included both large and restricted distribution as in Antrostomus sericocaudatus Cassin, 1849, which is distributed from Southern Central America to Southern Uruguay and Discosura longicaudus (Gmelin, 1788) that is restricted to the north of the Atlantic Forest and North and Eastern Amazon. In general, models of Amazonian populations were able to predict suitable areas in Atlantic Forest for 36 species: 49% (n = 18) of them predicted suitable areas for the populations in both north and south of the Atlantic Forest; 38% only in the northern part of the Atlantic Forest (n = 14); and 11% (n = 4) predicted suitable areas exclusively in southern Atlantic Forest (Fig 2). Only for Monasa morpheus (Hahn & Küster, 1823), the model was unable to predict distribution beyond Amazon. Models of Atlantic Forest populations predicted suitable areas in Amazon for all 37 species analyzed. Western and Eastern Amazon regions were present in 14% (n = 5) of the predicted areas. The predicted area mostly covered Western Amazon (76%; n = 28), and less frequently, it included the eastern region of this biome (11%; n = 4). Predictions in the Central-South Region of the Cerrado appeared in 22% (n = 8) of the Amazonian models and 38% (n = 14) of Atlantic models. Niche comparison between birds with disjunct distribution The first two axes of PCA-env accounted for 70% of the environmental variation in the studied areas. It can be drawn from Table 3 PLOS ONE The role of ecological niche evolution on speciation patterns of birds Amazonia and Atlantic rainforests 3). Even though species had low niche overlap values, niche similarity tests indicate that 54% (n = 20) of the species have a more similar niche than would be expected just by chance (p < 0.05). Stability between Amazonian and Atlantic niches was generally high (mean = 0.81; SD = 0.27). Only 13.5% (n = 5) of the species showed either no or low niche stability (<0.5). Expansion in the Atlantic populations' niche compared to Amazonians was only detected in 21% ( Discussion Our results indicate that bird populations that have disjunct distribution in the Amazon and Atlantic Forest show signs of Grinellian niche divergence, mainly supported by the low niche overlap among populations of the same species. Although, underlying processes of niche conservatism seemingly constrain niche evolution in these species because for nearly half of the studied species, observed niche overlap-although small-tended to be higher than what would just be expected by chance (similarity test results). Results from the ecological niche models also confirm that the dry diagonal prevents genetic flow between these two forests, as suitable areas almost always fall within the distribution of current forested regions. [24] reviewed previous tests of niche conservatism in a temporal context, where he found that most of them did not show considerable niche divergence in the time frame examined here (Pleistocene). Our results represent one of the few examples where niche divergence can occur under the such short evolutionary time. 
One primary mechanism is the lack of gene flow between the populations-supported by phylogeographic studies [75][76][77][78][79] and further inferred by our ecological niche models-that prevent swamping adaptations to the climatic regime characteristics of each forest [26,80]. Indeed, Atlantic Forest presents more climatic variation and lower temperatures and a smaller volume of precipitation than Amazon [5]. As observed in Fig 2, species niche centroid changes tend to follow the changes in the environmental centroid available in the accessible region for the populations. When comparing the predictive capacity of the ecological niche models built with Amazonian records, we observed that, for most species, predicted areas agree with the current pattern of occurrence observed in the Atlantic Forest. The same is true for the models of the Atlantic Forest population. These results support a general species niche conservatism of forest habitats constantly recreated by either population, even when their specific niches do not overlap. Niche conservatism-as a process-isolates the populations between Amazon and Atlantic Forest because it besets species adaptation to the conditions found at the dry diagonal. Atlantic populations' niche showed in general high unfilling, small expansion, and high stability, taking the niches of Amazonian populations as a reference. In other words, niches of the Atlantic Forest populations resemble a subset of that in Amazonian populations. These results support previous genetic evidence of an Amazonian origin of these species [81], which, PLOS ONE The role of ecological niche evolution on speciation patterns of birds Amazonia and Atlantic rainforests coupled with the process of niche conservatism, would explain the observed pattern-although some species defy this general interpretation (e.g., A. sericocaudatus and C. laeta). As previously pointed out by several authors (e.g., [82]), to confirm or not the presence of niche conservatism is not a fundamental approach (although it should surely be tested, see [83]) as to examine the possible consequences of niche conservatism as an ecological and evolutionary process. [26] further explored this topic and proposed that there is a conceptual misinterpretation that niche conservatism presumes the propensity of species to retain the ancestral niche; instead, species would retain their current niche. They called it the instantaneous niche retention, which is a key concept because, when geographic distance also reflects environmental distance (as in this context), the lack of gene flow associated with divergent natural selection would lead populations to track its instantaneous niche [26]. Therefore, the niche would rapidly evolve as a resulting process of niche conservatism. These differences could be already driving speciation. For instance, phylogenetic studies indicate that some of the species in our study (such as A. sericocaudatus, C. laeta, G. spirurus, H. rubica, and X. minutus) are evolutionarily independent units with recognized subspecies in both biomes [75][76][77][78][79]. For instance, Glyphorynchus spirurus populations even have significantly different morphological and vocalization patterns [76]. Molecular clock techniques confirm that some of these populations seem to have diverged during the Pleistocene (e.g., C. laeta, G. spirurus, and H. rubica), although for some divergence may have happened before, during the Pliocene (e.g., X. minutus) [76][77][78][79]. 
Phylogenetic divergence during Pleistocene has also been observed in primates [84][85][86], snakes [87], rodents, and marsupials [21,88]. The diversification of these taxa is consistent with the cycles of isolation of rainforests due to the expansion of savannas during the Pleistocene [1,4,21,[84][85][86][88][89][90], supporting this mechanism as an essential current driver of diversification in the Amazon and Atlantic Forest. Still, it is crucial to bear in mind that the observed niche divergence is not only a result of the most recent isolation of the two forests but likely to be a product of the long process of isolation and recurrent formation of secondary contact zones following the climatic cycles of the quaternary. Accordingly, we advise some caution in inferring the exact time of niche evolution here. Also, as pointed by [91], if both the lack of gene flow (by allopatry or the development of reproductive isolation) and the divergent selection are not stable through time, the role of ecological speciation in driving diversification in the region will not sustain. Conclusion We observed low niche overlap among disjunct populations of the same species that inhabit the Amazon and the Atlantic Forest. However, our results suggest that in 53% of the examined species, the low niche overlap is still higher than predicted under a null model. In general, Grinnellian ecological niches of the population in the Atlantic Forest resemble, to a certain extent, a subset of that of the Amazonian population. However, it is worth noting that some remarkable niche expansions occurred in Atlantic Forest populations. While we have not observed much overlap among the studied species populations, ecological niche models generated with occurrence records of populations from one biome usually recovered the general distribution of populations present on the other. These results lead us to conclude that niche conservatism, while present in many species, was not enough to avoid ecological differentiation among species' Grinnellian ecological niches. In sum, our results support the role of climatic changes that happened at the end of the Pleistocene-that isolated Amazon and Atlantic Forest-as driving ecological differences among the same species populations, and it is also a key mechanism of ongoing diversification in both regions.
Analysis of Fertilizer Raw Materials and Environmental Degradation: Using Granger Causality This study aims to examine the relationship between fertilizer raw materials and environmental degradation using the Granger causality test. This study uses data from World Bank Commodity Prices for fertilizer raw materials in the form of price data for phosphate, potash, and natural gas in the U.S. Dollar/metric tons and environmental degradation in the form of CO2 Emissions from methane and process emissions data from British Petroleum (BP) from 1990-2021. The methods are stationarity test, cointegration test, and Granger causality test. The results of the analysis state that there is no bidirectional relationship between these variables, but there is a one-way relationship where environmental degradation affects the price of raw materials for potash and phosphate fertilizers, and the price of natural gas is affected by environmental degradation. The implication of this research is the need to apply incentives to producers and consumers of fertilizers in the use of fertilizers to improve environmental quality. Introduction One of the issues raised by the ill-regulated exploration of natural resources is environmental degradation [1]- [7].Environmental deterioration is one of the challenges that needs to be watched out for in the twenty-first century, according to the United Nations Environment Program (UNEP) [8].The social and economic situation will be impacted by how dirty the environment becomes.Previous studies from many nations have recorded and investigated the loss in environmental quality and its effects on economic growth [9]- [13].High CO2 emissions are linked to environmental damage through a number of agents.Particularly in the economic sector where production inputs rely on natural resources, environmental degradation can be felt directly [14]- [17].This led to the analysis of the causal link between CO2 emissions and fertilizer production inputs in the agriculture sector in the current study.[18], [19] From an agricultural standpoint, environmental degradation has certain effects [14], [20].The manufacturing and industrial sectors, for example, have an impact on the agricultural sector.The demand for fertilizer is a major driver of the agriculture sector.The majority of countries with abundant mineral resources that can be exported to other countries meet the demand for fertilizer raw materials worldwide [21].Natural gas is another essential raw element needed to create nitrogen for different kinds of fertilizers.The early 2022 natural gas trade, however, is not looking good due to the conflict between Ukraine and Russia, which sparked a cold war in Europe, and the trade conflict between the United States and China [21]- [24].In addition to raising CO2 emissions, the conflict between Ukraine and Russia also marked the start of an increase in fertilizer pricing by global fertilizer producers [25]- [28]. 
There has been considerable research on the variables affecting emissions and fertilizers. However, it deals only with ready-to-use fertilizer and excludes the fluctuating cost of fertilizer's primary components. This paper focuses on the causal connection between emissions and the cost of phosphate, potash, and natural gas as fertilizer raw materials. Accordingly, this study uses historical data to analyze the long-term causal relationship between emissions and the cost of fertilizer raw materials. This paper's structure is as follows: Section 1 explains the background, while Section 2 discusses the research method. Section 3 describes the results and discussion, and Section 4 presents the conclusions.

Research Method
The World Bank and British Petroleum (BP) provide the data for this paper. This study makes use of time series data on the world's CO2 emissions and on phosphate, potash, and natural gas prices. By examining historical data, this study employs the Granger causality test methodology to identify the causal relationships between factors and their long-term effects. The Granger causality test is one way to determine whether two research variables statistically influence one another [30]. The Granger causality analysis consists of carrying out the stationarity test and the cointegration test and determining the optimal lag length. The initial stage is to carry out a cointegration test on the variables listed in Table 1.

Results and Discussion
The unit root test is carried out by conducting the ADF test on each variable. Since the p-values of all variables are below α = 5% (0.05), the null hypothesis of a unit root is rejected, which means that all research variables are stationary. All variables were subjected to a unit root test at the first difference with a significance level of α = 5%. The variables were then tested using the Granger causality test. Based on the results of the Granger causality test, it was found that natural gas has an effect on CO2 emissions but does not have a two-way relationship. Natural gas affects phosphate but does not have a bidirectional relationship. The findings of this study support a number of other studies in the fast-growing agricultural business sector. Several of these studies revealed the impact of utilizing fertilizers, particularly pesticides [31]-[33]. The application of pesticides must adhere to accepted usage guidelines. Using more than the advised dosage will cause a number of problems that may affect plant health and contaminate the environment. Insecticides' active ingredients enter the soil, where they can kill microbes or decomposers. Subsequently, other active ingredients are leached, and the remaining active ingredients evaporate and are released into the atmosphere. Pesticides produce chemical compounds that can damage the air in addition to the soil. When dangerous substances enter the body, residues that are not broken down cause a number of chronic ailments, including cancer, digestive issues, and heart attacks, because they restrict blood vessels [34], [35]. Farming practices that use fertilizers and pesticides that are not ecologically friendly degrade soil and water quality in the upstream area. This can lead to changes in paddy fields and horticulture areas, which can affect the hydrological conditions.
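To make the three-step procedure described above concrete, here is a minimal statsmodels sketch on two synthetic annual series standing in for CO2 emissions and one price series. The column names and numbers are placeholders rather than the BP/World Bank data, and only two of the study's four variables are shown.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(0)
years = pd.RangeIndex(1990, 2022)

# Placeholder annual series (random walks) standing in for CO2 emissions and
# the natural-gas price; the real inputs are the BP and World Bank series.
data = pd.DataFrame({
    "co2": np.cumsum(rng.normal(0.5, 1.0, len(years))),
    "gas": np.cumsum(rng.normal(0.2, 1.5, len(years))),
}, index=years)

# 1. Stationarity: ADF test on the first differences.
for col in data.columns:
    _, pval, *rest = adfuller(data[col].diff().dropna())
    print(f"ADF on d({col}): p = {pval:.3f}")   # p < 0.05 -> reject the unit root

# 2. Cointegration: Johansen trace test on the levels.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# 3. Granger causality on the differenced (stationary) series; the test asks
#    whether the second column Granger-causes the first.
grangercausalitytests(data[["co2", "gas"]].diff().dropna(), maxlag=2)
```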
Conclusion
According to the analysis's findings, there is no bidirectional relationship between these variables; there is only a one-way link, in which the price of natural gas influences environmental degradation and the price of phosphate. This needs to be highlighted because, according to several previous studies, the addition of chemicals to the soil can bring both benefits and disadvantages. This must be properly addressed by farmers as producers and by society as consumers, and academics, policy makers, and the government have an important role to play in research and policy for the agricultural sector and its environmental impact. The study's conclusion is that incentives must be used to encourage fertilizer producers and consumers to use fertilizers in a way that enhances environmental quality.

TABLE 1. Research variables (notes). The regression model tests the hypothesis of a relationship between variables X and Y, where the errors u_1t and u_2t are not correlated. Causality analysis requires a number of conditions to be met, including that the variables X and Y pass the stationarity test and the cointegration test. The steps for the Granger causality test are therefore to carry out stationarity tests and cointegration tests. Equation 2.3 is the stationarity test equation and Equation 2.4 is the cointegration test equation. The stationarity test checks the stationarity of the time series data using the Augmented Dickey-Fuller test (ADF test). The cointegration test checks the conditions for the degree of stationarity of the data. Where: X is variable X; Y is variable Y; α, β, λ, δ are coefficients; t is time; i and j are the lags 1, 2, 3, ..., k; and u is the error term.

Table 3 reports the cointegration test results for each variable. Based on these results, Johansen's cointegration test indicates that the causality test can be carried out in this research. The cointegration test results also indicate that there is no long-term relationship between the variables; meanwhile, in the short term, all variables adjust to each other to achieve long-term balance.
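The regression equations referred to in the table notes above did not survive text extraction. The block below restates the standard bivariate Granger-causality pair and the ADF regression in the notation of the variable list (coefficients α, β, λ, δ; lags i, j; errors u). The equation numbers are inferred from the text, and the exact specification used in the paper may differ.

```latex
% Bivariate Granger-causality regressions (standard form):
\begin{align}
Y_t &= \sum_{i=1}^{k} \alpha_i\, Y_{t-i} + \sum_{j=1}^{k} \beta_j\, X_{t-j} + u_{1t} \tag{2.1}\\
X_t &= \sum_{i=1}^{k} \lambda_i\, X_{t-i} + \sum_{j=1}^{k} \delta_j\, Y_{t-j} + u_{2t} \tag{2.2}
\end{align}
% Augmented Dickey-Fuller regression underlying the stationarity test:
\begin{equation}
\Delta Y_t = \mu + \gamma\, Y_{t-1} + \sum_{i=1}^{p} \phi_i\, \Delta Y_{t-i} + \varepsilon_t \tag{2.3}
\end{equation}
```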
A study on the physicochemical parameters for Penicillium expansum growth and patulin production: effect of temperature, pH, and water activity Abstract Penicillium expansum is among the most ubiquitous fungi disseminated worldwide, that could threaten the fruit sector by secreting patulin, a toxic secondary metabolite. Nevertheless, we lack sufficient data regarding the growth and the toxigenesis conditions of this species. This work enables a clear differentiation between the favorable conditions to the P. expansum growth and those promising for patulin production. A mathematical model allowing the estimation of the P. expansum growth rate according to temperature, a W, and pH, was also developed. An optimal growth rate of 0.92 cm/day was predicted at 24°C with pH level of 5.1 and high a W level of 0.99. The model's predictive capability was tested successfully on artificial contaminated apples. This model could be exploited by apple growers and the industrialists of fruit juices in order to predict the development of P. expansum during storage and apple processing. Introduction Filamentous fungi are broadly dispersed throughout the environment and are responsible for the spoilage and poisoning of several food matrices. The most common and widespread mycotoxigenic fungi are mainly triggered by the genera: Aspergillus, Fusarium, and Penicillium (Sweeney and Dobson 1999;Binder et al. 2007). Within the latter genus, Penicillium expansum is one of the most studied species (Andersen et al. 2004). Penicillium expansum is a wound parasite fungus that invades fruits via injuries, caused by unfavorable weather conditions before harvest (hail, strong wind) or by rough handling, harvesting, and transport (Sanderson and Spotts 1995). This ubiquitous fungus commonly found on pome fruits causes a serious postharvest disease known as blue mold rot and produces significant amounts of patulin, giving rise to substantial fruit losses and serious public health issues (Moake et al. 2005). Patulin is known to have potent cytotoxic, genotoxic as well as immunotoxic effects ORIGINAL RESEARCH A study on the physicochemical parameters for Penicillium expansum growth and patulin production: effect of temperature, pH, and water activity even at relatively low exposure levels (Puel et al. 2010). Therefore, the European Union has fixed a maximum tolerated level of 50 μg/kg for fruit juices and derived products and 25 μg/kg for solid apple products. The maximum level allowed for apple products intended for infants and young children was set at 10 μg/kg (European C, 2003(European C, , 2006. The understanding of P. expansum physiology under controlled experimental conditions may help forecast its behavior in natural conditions and predict its potential risks on the fruit sector and consumer health. In the last decades, predictive microbiology has emerged to be a useful tool in food industry used to predict the behavior of microorganisms through the development of several mathematical models capable of describing the responses of these pathogenic organisms to particular environmental conditions (Ross and McMeekin 1994;Fakruddin et al. 2011). Although it was more commonly used to control the bacterial growth (Gibson et al. 1988; Baranyi and Roberts 1994;Gaillard et al. 1998;Juneja et al. 2007), the situation has changed and this tool was lately employed in the modeling of fungal growth as well. The fungal proliferation and mycotoxin synthesis in foodstuffs are subject to multiple physicochemical parameters. 
The water activity (a W ), and the temperature adopted during the storage period deemed as the most imperative ones (Holmquist et al. 1983;Dantigny et al. 2005;Bryden 2007). Likewise, other intrinsic factors, particularly the pH of the product, can largely affect the mold development (Rousk et al. 2009). The combination of these physicochemical parameters along with the usage of modeling techniques might be helpful to control the fungal growth and subsequently the biosynthesis of mycotoxins. A growing number of studies are available in the literature dealing with the predictive modeling approach of fungi (Valık et al. 1999;Panagou et al. 2003;Parra and Magan 2004;Tassou et al. 2007;Garcia et al. 2011). For P. expansum in particular, few studies have been conducted to characterize the growth and the toxigenesis conditions of this species despite its large implication in foodstuff contamination. The growth rate of P. expansum has been studied as function of the storage temperature, the a W and the oxygen levels (Lahlali et al. 2005;Marín et al. 2006;Baert et al. 2007a;Judet-Correia et al. 2010).Moreover, its patulin production capacity has been independently assessed as a function of temperature, pH, and fruit varieties (Morales et al. 2008;Salomao et al. 2009). All these studies lack sufficient information about the simultaneous effects of such parameters on P. expansum growth and its patulin production capability. In this regard, it is worth mentioning that the most suitable conditions for the fungal growth may not be the optimal conditions for mycotoxin production, thus it is not possible to predict the latter from the kinetic growth data. Moreover, the interactive effects of different sets of abiotic factors cannot be predicted by such types of studies. With these perspectives, this study was undertaken to firstly determine in vitro the individual effects of three major physicochemical parameters; the temperature, pH, and a W on both the growth and patulin production by the blue-rot ascomycetous fungus, P. expansum. These data were subsequently invested in the development of a mathematical model which enables accurate prediction of optimal and marginal conditions for P. expansum growth. Experimental Fungal isolate This study was carried out on one strain of P. expansum, initially isolated from grapes in the Languedoc Roussillon region of France. The strain was previously characterized by DNA sequencing of the ITS gene region and deposited in ARS collection (USDA, Peoria, IL) as NRRL 35695. The strain was formerly confirmed as a patulin-producer (Tannous et al. 2014). Inoculum preparation The investigated strain was subcultured on Potato Dextrose Agar (PDA) medium (Biolife, Milano, Italy) and incubated at 25°C to obtain a heavily sporulating culture. The conidial suspension was prepared by washing the surface of the fresh, mature (7-day-old colony) culture with 10 mL of sterile distilled water amended with Tween 80 (0.05%, v:v) and by gently rubbing with a sterile loop. The spores' concentration was reckoned by microscopy using a Neubauer counting chamber, and then adjusted to 10 5 spores/μL. Growth media and incubation conditions All the assays were conducted on the synthetic Czapek Glucose agar medium in order to minimize other sources of variation that could be encountered on natural media and to identify clearly the effects of temperature, pH, and a W . This medium has already been proven to be a favorable substrate for P. expansum growth and patulin production (data not shown). 
The overall assayed conditions were five temperatures, three pH levels, and four a W values. Six separate replicate Petri plates were used for each temperature, a W, and pH value, three of which were overlaid with sterilized cellophane disks to ensure a good separation between mycelium and agar. This will allow an accurate estimation of the mycelial mass and the amount of patulin produced on agar medium (Reeslev and Kjoller 1995;Tannous et al. 2014). In all the experimental conditions, media were centrally inoculated with 10 6 spores from the spore suspension. For temperature investigations, the synthetic Czapek glucose agar medium was prepared based on the formulation reported by Puel et al. (2005). The inoculated Petri plates were incubated at 30, 25, 16, 8, and 4°C in high precision (±0.1°C) for 2 weeks. The synthetic Czapek glucose agar medium was also used for assessing the effect of a W on the growth and patulin production by P. expansum. The unmodified medium (a W 0.99) was adjusted to a W levels of 0.95, 0.90, and 0.85 by adding increasing amounts of glycerol. Water activities were subsequently determined with the HygroLab 2 water activity indicator (Rotronic, Hauppauge, NY). Petri plates of the same a W value were separately enclosed in polyethylene bags to prevent water loss. The inoculated Petri plates were incubated at 25°C for 2 weeks. The pH surveys were also conducted on Czapek glucose agar incubated at a constant temperature of 25°C for only 7 days. The pH of the medium was adjusted to 2.5, 4, and 7 using two buffer solutions (Citric acid (0.5 mol/L): Potassium Hydrogen Phosphate (0.5 mol/L)) in the respective combinations 49 mL: 2 mL, 30.725 mL: 38.55 mL, and 8.825 mL: 82.35 mL, for a total volume of 250 mL of medium. These pH values were chosen as they cover the pH range found in different eating-apple and cider apple varieties. The final pH of the medium was verified using a pH-meter with special probe for alimentary articles by Hanna instruments (Tanneries, France). Growth and lag phase assessment After inoculation, agar plates, harvested without cellophane disks, were checked on a daily basis to perceive if visible growth had started. As soon as a visible growth has begun, P. expansum growth was monitored by diameter measurements along two perpendicular directions, at regular time intervals. The lag phase (time required for growth) was evaluated and the radial growth rate (cm/day) was obtained from linear regression slopes of the temporal growth curves. Measurements were carried for an overall period of 14 days for the temperature and a W surveys and 7 days only for the pH surveys. Fungal growth was also evaluated with regard to biomass (mg dry weight). After the appropriate incubation period, the mycelia developed on the surface of agar plates topped with cellophane disks were scratched with a scalpel, collected, and dried at 80°C until a constant weight, corresponding to the dry biomass weight, was obtained. Patulin extraction and HPLC analysis After the appropriate incubation period (7 days for the pH assays and 14 days for the temperature and a W assays), the agar medium was scraped off the Petri dishes overlaid with sterile cellophane, cut into strips, mixed with 50 mL of ethyl acetate (Sigma-Aldrich, Saint-Quentin Fallavier, France) and macerated with agitation (250 rpm) at room temperature on an orbital shaker (Ningbo Hinotek Technology, Zhejiang, China). The contact time was 2 days. 
The organic phase was then filtered through Whatman Grade 413 filter paper (Merck, Darmstadt, Germany) and evaporated to dryness under liquid nitrogen. The dried residue was dissolved in 2 mL of methanol and then filtered through a 0.45 μm syringe filter (Navigator, Huayuan Tianjin, China) into a clean 2 mL vial. One hundred microliter aliquots of these extracts were injected onto the Waters Alliance HPLC system (Saint-Quentinen-Yvelines, France) for the quantitative determination of the patulin concentration. The patulin was detected with a Waters 2998 Photodiode Array Detector, using a 25 cm × 4.6 mm Supelco 5 μm Discovery C18 HPLC Column (Sigma-Aldrich) at a flow rate of 1 mL/min. A gradient program was used with water (Eluent A) and acetonitrile HPLC grade (eluent B) and the following elution conditions: 0 min 5% B, 16 min 2% B, 20 min 60% B, 32 min 5% B. The presence of patulin was monitored at a 277 nm wavelength. A calibration curve was constructed with patulin standard (Sigma-Aldrich) at concentrations ranging from 0.05 to 10 μg/mL. Accordingly, the patulin concentrations were determined and results were expressed in ppm. The LOD and LOQ of the method were calculated using the slope (S) of the calibration curve, obtained from linearity assessment, and the standard deviation of the response (SD). These values were determined as follows: LOD = 3.3 × SD/S, LOQ = 10 × SD/S. Model development Growth rate experimental data were implemented in a home developed C++ language program that is able to interpolate between various points in different or multiple dimensions. Effects of the different parameters (temperature, pH, and a W ) on the P. expansum growth rate were taken into account according to the experimental points already obtained. Thus, the effect of each of these parameters was considered on its own calculating the growth ratio factor effect obtained from the experimental data. The program proceeds as a simultaneous interpolator between the different data points and the effect ratios of each parameter on the growth rate. Therefore, the program allows us to estimate the growth rates (expressed in cm/ day) for fixed temperature, a W, and pH values depending on the variations defined by the input data. The model took into account the latency phase versus the temperature, which was modeled by a 4 th degree polynomial equation: (1) where T is the temperature parameter expressed in Degree Celsius (°C). Likewise, the latency versus the a W was taken into account to fit the following power equation: (2) where a w represents the water activity of the medium. In the both latency phase fits, the correlation parameter R 2 was higher than 0.97 showing a good accuracy of the fitting procedure. The P. expansum diameter growth (cm) versus time should theoretically follow a linear regression while considering the variation in each of the temperature, a W , and pH parameters. Therefore, a Pearson chi-squared test was performed confirming that our hypothesis is true for over 99.9%. Thus, the slope and the intercept dependencies on each of the previously mentioned parameters were calculated according to the available experimental points. 
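A minimal numerical sketch of the modeling scheme described in this and the next paragraph: the growth rate is fitted against temperature with a 4th-degree polynomial at the reference condition (a_W 0.99, pH 4), the a_W and pH effects are applied as ratio factors, and a grid search returns the predicted optimum. All rates except the 0.67 cm/day value reported later in the results are invented placeholders, and the paper's actual program is a custom C++ interpolator, not this Python code.

```python
import numpy as np

# Placeholder growth rates (cm/day); only the 0.67 value at 25 C matches the
# experimentally determined rate reported in the results, the rest are invented.
T_obs  = np.array([4.0, 8.0, 16.0, 25.0, 30.0])
mu_T   = np.array([0.05, 0.15, 0.45, 0.67, 0.30])   # at aW 0.99, pH 4 (reference)
aw_obs = np.array([0.85, 0.90, 0.95, 0.99])
mu_aw  = np.array([0.05, 0.20, 0.45, 0.67])         # at 25 C, pH 4
ph_obs = np.array([2.5, 4.0, 7.0])
mu_ph  = np.array([0.40, 0.67, 0.55])               # at 25 C, aW 0.99

# Temperature dependence as a 4th-degree polynomial, as in the paper's model.
poly_T = np.polynomial.Polynomial.fit(T_obs, mu_T, deg=4)

def ratio(x, x_obs, mu_obs, mu_ref=0.67):
    """aW and pH enter as growth-ratio factors relative to the reference condition."""
    return np.interp(x, x_obs, mu_obs) / mu_ref

def growth_rate(T, aw, ph):
    return float(poly_T(T) * ratio(aw, aw_obs, mu_aw) * ratio(ph, ph_obs, mu_ph))

# Routine search over parameter combinations for the fastest predicted growth.
best = max(
    ((T, aw, ph) for T in np.linspace(4, 30, 53)
                 for aw in np.linspace(0.85, 0.99, 15)
                 for ph in np.linspace(2.5, 7.0, 19)),
    key=lambda c: growth_rate(*c))
print("predicted optimum (T, aW, pH):", tuple(round(v, 2) for v in best),
      "-> %.2f cm/day" % growth_rate(*best))
```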
Growth slope and intercept dependencies on temperature were fit into the following 4 th degree polynomial equations: (3) The growth slope and intercept dependencies on a W were fit into the following equations: And the growth slope dependency on pH was fit into the following equation, considering the value for intercept as null: All slope and intercept fits were convergent to more than 99% with the experimental points. In order to analyze the simultaneous effect of the different parameters, growth rate values were calculated for an a W level of 0.99 and a pH of 4 (reference values) using the temperature's formula given by equations (3) and (4). Using these equations the growth rate (cm/day) can be calculated for different temperature values. Using the same method of proceeding we can use equations (5) and (6) to calculate the effect of the a W on the growth rate and the equation (7) to analyze the effect of the pH. If two or more parameters are to be changed at the same time, the temperature effect is taken into account first, and then the obtained growth rate is further modified by the second parameter effect. The modification is a simple ratio factor that is applied to the growth rate following temperature change. Therefore, the effects of the a W and the pH were implemented as a diameter ratio factor. Each factor was calculated by dividing the diameter obtained for a desired parameter value by that obtained for the experimental values that we considered as a reference. Finally, a routine test of the different values combinations of pH, T, and a W was carried out in order to retrieve the highest growth rate and its relative optimal parameters to obtain such a result. Validation of the predictive model in vivo In order to assess the validity of the predictive model in natural conditions, three apple varieties (Golden Delicious, Granny Smith, and Royal Gala) with different initial pH values (Table 1) were used. As previously described by Sanzani et al. (2012), apples were surface-sterilized using a 2% sodium hypochlorite solution and rinsed with water. Apples were then injured using a sterile toothpick to a depth of approximately 0.5 cm, and the wounded sites were inoculated with a 10 μL droplet of the P. expansum conidial suspension at a concentration of 10 5 conidia/μL. The infected apples were then incubated for 2 weeks under three different temperatures (4°C, 25°C, and 30°C). The set of experimental conditions used to check the predictive capability of the model are given in Table 1. Duplicate analyses were performed on each set of conditions. Results and Discussion Studies on the growth and patulin production by P. expansum under different conditions Colony diameters were measured on a daily basis and plotted against time. For all the tested conditions, the growth curves based on colony diameters were typical of a linear fungal growth after a short lag period ranged from 1 to 7 days (Baert et al. 2007a). However, it is worth mentioning that the fungal growth was in some cases limited by the Petri plates' dimension. In such cases, growth curves lose their linear appearance just after reaching the limiting diameter value (~7 cm) (Figs. 1A, 3A). Under each culture condition, the patulin content was quantified by HPLC and expressed in ppm. Temperature effect Since in many cases, apples and other fruits are stored in refrigerators (at 4°C) or in plastic barrels outdoors where temperatures of 25-30°C are very common, the temperature analysis were performed within a 4 to 30°C range. 
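As a rough illustration of the combination rule described above (temperature sets a baseline growth rate at the reference conditions of a W 0.99 and pH 4, and the a W and pH effects then enter as multiplicative ratio factors), the following sketch interpolates between invented growth-rate points. It is not the authors' program, and the numerical values are placeholders rather than the fitted equations (3)-(7).

```cpp
#include <cstdio>
#include <vector>

// One measured point: parameter value (T, aW or pH) and the growth rate there.
struct Point { double x, rate; };

// Piecewise-linear interpolation between measured growth-rate points.
double interpolate(const std::vector<Point>& pts, double x) {
    if (x <= pts.front().x) return pts.front().rate;
    for (std::size_t i = 1; i < pts.size(); ++i)
        if (x <= pts[i].x) {
            const double t = (x - pts[i - 1].x) / (pts[i].x - pts[i - 1].x);
            return pts[i - 1].rate + t * (pts[i].rate - pts[i - 1].rate);
        }
    return pts.back().rate;
}

int main() {
    // Hypothetical growth-rate points (cm/day) standing in for the measured curves.
    const std::vector<Point> vsT  = {{4, 0.03}, {8, 0.10}, {16, 0.40}, {25, 0.67}, {30, 0.20}};
    const std::vector<Point> vsAw = {{0.85, 0.02}, {0.90, 0.25}, {0.95, 0.45}, {0.99, 0.67}};
    const std::vector<Point> vsPh = {{2.5, 0.45}, {4.0, 0.67}, {7.0, 0.50}};

    // Reference conditions at which the temperature curve was measured.
    const double refAw = 0.99, refPh = 4.0;

    // Temperature sets the baseline rate; aW and pH enter as ratio factors.
    const double T = 16.0, aW = 0.95, pH = 7.0;
    const double base     = interpolate(vsT, T);
    const double awFactor = interpolate(vsAw, aW) / interpolate(vsAw, refAw);
    const double phFactor = interpolate(vsPh, pH) / interpolate(vsPh, refPh);
    std::printf("predicted rate at T=%.0f C, aW=%.2f, pH=%.1f: %.2f cm/day\n",
                T, aW, pH, base * awFactor * phFactor);
}
```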
The investigated strain of P. expansum was able to grow in the temperature range studied at unmodified a W and pH (Fig. 1A). Interestingly, the strain displayed a different colonial morphology at each of the five tested temperatures. At 8°C and 16°C, green colonies with white margins and a yellow to cream reverse side were observed, whereas at 25°C the fungus showed green conidia with a dull-brown color on the reverse. An unusual morphology of the fungus was perceived at 30°C; the colonies grew vertically and stayed smaller than 3 cm, with serrated edges (Fig. 2). Such morphological changes following incubation at various temperatures have been linked to a stress response in other filamentous fungi (Verant et al. 2012). The optimal temperature for the growth of this strain of P. expansum was around 25°C, at which the fungus exhibited the shortest lag phase and the greatest colony growth (8.9 cm) at the end of the incubation period (Fig. 1A). This observation is in accordance with the literature data, which also describe an optimum growth temperature for this species near 25°C (Pitt et al. 1991; Lahlali et al. 2005; Baert et al. 2007a; Pitt and Hocking 2009). Lag phases prior to growth increased when the temperature varied from optimum to marginal conditions; lag phases of 6 and 3 days were noticed at the lowest temperatures of 4°C and 8°C, respectively, in addition to a 2-day latency period at the highest studied temperature (30°C) (Fig. 1A). This result supports the prediction of Baert et al. (2007a) that cold storage does not prevent fruit deterioration by P. expansum, but only delays it. The colony growth rates were calculated as the slope of the linear segment of each growth curve. The growth rate of P. expansum as a function of temperature appears to have a bell-shaped distribution with an optimum at 25°C and an experimentally determined value of 0.67 cm per day (Fig. 1B). The growth features of P. expansum were also evaluated in terms of fungal biomass development; the highest mycelial dry weight obtained was 160 ± 15 mg. Patulin was identified by its retention time (9 min) and its UV spectrum according to an authentic standard, and was quantified by measuring peak area against the constructed standard curve (coefficient of variation 0.919%). The values of the limit of detection (LOD) and limit of quantification (LOQ) for patulin were 0.04 μg/mL and 0.1 μg/mL, respectively. The patulin production by P. expansum also exhibited a marked temperature-dependent variability. The histogram of patulin production versus temperature seen in Figure 1C has a characteristic bell shape. The highest patulin concentrations were attained at 16°C. However, a further increase in temperature to 25°C and 30°C caused a decrease in patulin production. These data match a previous study of Paster et al. (1995) that compared patulin production on apples kept at various storage temperatures of 0, 3, 6, 17, and 25°C. In that study, the highest patulin concentration was found at 17°C. Our results are also in perfect agreement with those of Baert et al. (2007b), who showed a higher patulin production at low temperatures. However, they contradict many other studies that have reported a stimulation of patulin production by this fungus with increasing temperature (McCallum et al. 2002; Salomao et al. 2009). These results indicate that temperature plays a role in patulin accumulation but not in a determinant way; other extrinsic and intrinsic factors appear to interact.
A comparison of the obtained bell-shaped dependencies of the P. expansum growth rates and patulin levels as a function of temperature (Fig. 1B and C) revealed that the temperature ranges required to produce patulin were different from, and more restrictive than, those for growth. Similar results have been reported for other fungal species. The Fusarium molds associated with the production of trichothecenes (T-2 and HT-2 toxins) have been reported to grow prolifically at temperatures ranging between 25 and 30°C with a low production of mycotoxins. However, high levels of mycotoxins were produced at low temperatures (10 to 15°C), associated with reduced fungal growth (Nazari et al. 2014). Similarly, the optimal temperature for fumonisin B1 production was lower than the optimal temperature for the growth of Fusarium verticillioides and Fusarium proliferatum (Marin et al. 1999). Another example of a narrower range of temperatures for toxin production compared with fungal growth is shown by the accumulation of ochratoxin in barley by Penicillium verrucosum. The growth of this species was possible at temperatures fluctuating between 0°C and 31°C, whereas ochratoxin A production was only detected in the temperature range 12-24°C (Northolt et al. 1979). Effect of water activity The a W of fresh fruits falls in the range 0.97-0.99. Although patulin has also been detected in dried fruits (Karaca and Nas 2006; Katerere et al. 2008) with a W values less than 0.90, analyses were carried out over an a W range of 0.85-0.99. Figure 3A shows the mean diameters of P. expansum, measured at different controlled a W values, over the culturing time. This species displayed optimum growth at the highest a W of 0.99, with the shortest lag phase and the greatest colony growth (8.3 cm) after the incubation period. At this highest a W value, the fungus recorded the highest growth rate (0.6 cm per day). A drastic decrease in the P. expansum growth rate was observed when lowering the a W from 0.99 to 0.85, using glycerol as the humectant (Fig. 3B). The P. expansum isolate displayed different mycelial mass production in the Czapek glucose medium with modified a W . After 14 days, the highest production of fungal dry mass was obtained at an a W of 0.95 (400 ± 50 mg dry weight per 20 mL of medium), followed by 0.99 (201 ± 10 mg) and 0.90 (105 ± 15 mg). Weak mycelial growth (1.1 ± 0 mg fungal dry weight) was recorded at the minimal a W tested. In the literature, the minimal a W for the germination of this species ranges between 0.83 (Mislivec and Tuite 1970) and 0.85 (Judet-Correia et al. 2010), depending on the strain. As can be observed, the mycelial dry weight estimated at a W 0.95 was approximately twice the value found at a W 0.99. However, the fresh mycelial weight was significantly greater at the highest a W value (data not shown). The patulin production was also significantly affected by the water availability in the medium. No patulin was produced at an a W of 0.85 throughout the incubation period. On the other hand, traces of patulin were detected after 14 days of culture when the fungus was grown at the two a W values of 0.90 and 0.95. A significant increase in the patulin production by P. expansum was perceived at the a W of 0.99 (Fig. 3C). These findings on the impact of a W on the patulin production by P. expansum are consistent with two earlier studies reporting that the minimal a W that allows patulin production by this fungus is 0.95 (Lindroth et al.
1978; Patterson and Damoglou 1986). As previously outlined for the temperature analysis, the a W conditions that promote patulin production were also more restrictive than those allowing growth. Although there were no significant differences in terms of P. expansum growth at the two water activities 0.95 and 0.99, the patulin production was significantly stimulated at 0.99, whereas only traces of patulin were detected at 0.95 (Fig. 3B and C). Effect of pH As several previous studies have reported a decrease in the pH of the medium during P. expansum growth, the pH assays were conducted over an overall incubation period of 7 days in order to reduce pH fluctuations. This pH decrease is due to the production of organic acids (gluconic acid), which lowers the pH to values at which patulin is more stable (Baert et al. 2007b; Morales et al. 2008; Barad et al. 2013). In our study, the pH of the medium was recorded at the end of the experiment. The initial pH of 7 decreased slightly to 6 over the 7-day experiment; however, pH 2.5 and pH 4 remained constant at their initial values throughout the incubation period. The ability to change the ambient pH in order to generate a more suitable growing environment has been described for other fungal species and was shown to occur in either direction. Some necrotrophic species like Alternaria alternata (Eshel et al. 2002) or Colletotrichum gloeosporioides (Kramer-Haimovich et al. 2006) can alkalize the host tissue by secreting ammonium, whereas other species can acidify the medium by secreting organic acids, like oxalic acid in the case of Botrytis cinerea (Manteau et al. 2003). Under the different pH values tested in our study, the lag phase periods were estimated at 1 day of culture from the linear regression curves of colony radius plotted versus time (Fig. 4A). It was also found that the growth rate of P. expansum as a function of pH is bell-shaped, with a maximum estimated value of 0.9 cm per day at pH 4 (Fig. 4B). Regarding the mycelium dry weights, there were no significant differences between the three tested pH levels. Similarly, Morales et al. (2008) found that P. expansum growth, estimated in terms of fungal biomass, was unaffected by the initial pH of the fruit juice. The pH of the medium showed a significant effect on the ability of this fungus to produce patulin. The lowest patulin production was reported at pH 2.5, whereas the highest patulin level was detected at pH 4. The patulin-producing capacity of P. expansum decreased when the pH of the medium increased from 4 to 7 (Fig. 4C). These results are comparable with those presented in previous studies. Damoglou and Campbell (1986) previously reported that the pH range 2.8-3.2 resulted in less patulin accumulation by P. expansum compared to the pH range 3.4-3.8. While assessing patulin accumulation in both apple and pear juices at different pH values, Morales et al. (2008) also observed an increase in patulin production when raising the pH from 2.5 to 3.5. The small amounts of patulin found at pH 2.5 are most probably due to a low production rather than to a low stability of patulin. In the study of Drusch et al. (2007), the patulin stability was assessed over a wide pH range. Data from that study indicate that patulin is highly stable in the pH range 2.5-5.5; however, a greater decrease in the patulin concentration, to 36% of the initial value, was observed at neutral pH (Drusch et al. 2007). Mathematical modeling of P. expansum growth
The growth data modeled in this work comprised the latency phase and the growth curves of P. expansum strain NRRL 3565 at five temperatures, four a W values, and three pH levels. Mycelial extension of the colonies against time almost invariably followed a straight line after an initial lag period. The growth rates, expressed in cm per day, were calculated as the slopes of the regression curves. The growth rates recorded under the different conditions (data presented above) were used as inputs to calculate the design parameters of equations (3)-(7). The presented calculation approach first underwent a mathematical validation, confirming that the differences between the theoretical values predicted by the model and the data obtained under the conditions used to build the model are not significant. Figure 5 shows the effects of temperature, a W and pH on the growth rate (expressed in cm/day) obtained using the approach described in this study. The surfaces generated by the model data summarize all the growth rate values predicted under combined temperature and a W (at a fixed pH of 4), combined pH and temperature (at a fixed a W of 0.99), and combined pH and a W (at a fixed temperature of 25°C). The fixed values are those for which the effect of the other combination of factors on P. expansum growth is best visualized.
Figure 5. Three-dimensional response surfaces showing the expected growth rates (cm/day) determined by the developed model as a function of temperature and a W (A), temperature and pH (B), and pH and a W (C). Graph A corresponds to a fixed pH value of 4, graph B to a fixed a W value of 0.99, and graph C to a fixed temperature of 25°C.
The model predicted that the optimal conditions for P. expansum growth were a temperature of 24°C, an a W value of 0.99, and a pH value of 5.1. The predicted growth rate at optimal conditions was 0.92 cm/day. The minimal conditions for P. expansum growth as predicted by the model were a temperature of 3°C, an a W value of 0.85, and a pH value of 2. Under this combined set of conditions, a slowdown of growth to an almost zero level is predicted by the model. The experimental validation of the P. expansum growth model was carried out on apples. The objective was to test whether the predictive model would still perform well in a realistic situation. This validation led to acceptable results under most of the conditions. The growth of the fungus on apples was in general slower than predicted by the model (Fig. 6). However, using the Pearson product-moment correlation coefficient, a value of 0.96 was found between the experimental and predicted growth rate values. The difference between the predicted and observed growth rates on apples is most probably due to the apple itself, which might be a stress factor for the fungus. Such stress factors include the intact tissue structure of the apple, which must be degraded in order to enable mold development to occur and which causes a reduced O2 availability within the fruit. A similar result was observed in a previous study investigating the effect of temperature on P. expansum growth in both Apple Puree Agar Medium (APAM) and apples (Baert et al. 2007a). Our results obtained from the in vivo experiments indicate that the employment of the developed modeling approach to assess the combined effect of temperature, a W and pH on the growth responses of P. expansum can be considered satisfactory.
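The in vivo validation above rests on the Pearson product-moment correlation between predicted and observed growth rates (r = 0.96 was reported). A minimal, self-contained sketch of that statistic, with invented example values, is:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Pearson product-moment correlation coefficient between two equally sized samples.
double pearson(const std::vector<double>& x, const std::vector<double>& y) {
    const std::size_t n = x.size();
    double mx = 0, my = 0;
    for (std::size_t i = 0; i < n; ++i) { mx += x[i]; my += y[i]; }
    mx /= n; my /= n;
    double sxy = 0, sxx = 0, syy = 0;
    for (std::size_t i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}

int main() {
    // Hypothetical predicted vs. observed radial growth rates (cm/day) on apples.
    std::vector<double> predicted = {0.02, 0.67, 0.30, 0.05, 0.55, 0.25};
    std::vector<double> observed  = {0.01, 0.55, 0.24, 0.04, 0.48, 0.20};
    std::printf("Pearson r = %.3f\n", pearson(predicted, observed));
}
```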
However, the risk in using the described model in real situations may lie in the difference in the initial inoculum size and the unrealistic constant conditions of temperature and moisture content. A review of the literature reveals that certain mathematical models were developed to describe and predict the P. expansum growth under different environmental conditions. The combined effects of temperature and a W on the growth rate of P. expansum were previously studied and modeled by Lahlali et al. (2005) on PDA medium. In their study, the data obtained with both sorbitol and glycerol as humectant were modeled by means of the quadratic polynomial model. In agreement with our findings, it was shown that P. expansum grows best at temperatures ranging from 15 to 25°C and at an a w ranging from 0.960 to 0.980. The growth rate and the lag time for six P. expansum strains were modeled as a function of temperature by Baert et al. (2007a) on APAM. In accordance with our results, the optimal temperature for growth varied between 24°C and 27°C depending on the strain. A similar modeling study was later conducted on both malt extract agar (with a pH value of 4.2 and an a W of 0.997) and on simulating yogurt medium (Gougouli and Koutsoumanis 2010), where a Cardinal Model with Inflection (CMI) was used. The optimal temperature for P. expansum growth was determined as 22.08°C, which was close to that predicted by our model. Moreover, the predicted growth rate (0.221 mm/h, the equivalent of 0.55 cm/day), was lower than that expected in our study. Another predictive study was conducted by Judet-Correia et al. (2010) using the Cardinal Model with Inflection. The objective of the latter was to develop and validate a model for predicting the combined effect of temperature and a W on the radial growth rate of P. expansum on PDA medium. The optimal conditions estimated by this study on Potato Dextrose were 23.9°C for temperature and 0.981 for a W . These estimated values are close to those predicted in the present work. However, the optimal growth rate expected was remarkably lower. This difficulty in comparing the growth rates is obviously due to the fact that the isolates in the study of Judet-Correia et al. (2010) were grown on PDA, whose composition differs from that of Czapek glucose agar. The importance of this study resides in the fact that it takes into account the three key growth factors (Temperature, a W and pH) unlike the previously conducted studies that did not consider more than two exogenous factors. It is also worth mentioning that the effect of the latter factor on P. expansum growth has never been modeled before. In addition to its growth modeling approach, this study considers the distinction between the favorable conditions for growth and those for toxigenesis of P. expansum. It remains to note that this predictive model is built up based on the data on one P. expansum strain (NRRL 35695). As previously reported by McCallum et al. (2002), P. expansum isolates exhibit different growth rates. In this regard, it will be interesting to evaluate the ability of this model to extrapolate to other strains within the same species. Ultimately, the extent to which the model can be applied to other inoculum sizes, other growth media and fluctuating temperatures is to be determined in future validation studies for extrapolation. Conclusion The findings in this study provide a considerable insight and a very interesting and informative comparison of the growth rate and patulin production of P. 
expansum regarding three eco-physiological factors mainly involved in the proliferation of pathogenic fungi. In the present work, a predictive model was also developed as a tool to be used for the interpretation of P. expansum growth rate data. Within the experimental limits of temperature, pH and a W , this model was able to predict the colony radial growth rates (cm/day) across a wide combination of culture conditions. Furthermore, the validation showed that the model can predict the growth of P. expansum under natural conditions on apples with acceptable accuracy. To conclude, the developed mathematical model for predicting P. expansum growth on a laboratory scale can be used as a tool to assess the risk posed by P. expansum in the fruit juice industry, by predicting the conditions under which P. expansum growth in food matrices might become a problem.
Coverage of Novel Therapeutic Agents by Medicare Prescription Drug Plans Following FDA Approval BACKGROUND: Regulatory approval of novel therapies by the FDA does not guarantee insurance coverage requisite for most clinical use. In the United States, the largest health insurance payer is the Centers for Medicare & Medicaid Services (CMS), which provides Part D prescription drug benefits to over 43 million Americans. While the FDA and CMS have implemented policies to improve the availability of novel therapies to patients, the time required to secure Medicare prescription drug benefit coverage—and accompanying restrictions—has not been previously described. OBJECTIVE: To characterize Medicare prescription drug plan coverage of novel therapeutic agents approved by the FDA between 2006 and 2012. METHODS: This is a cross-sectional study of drug coverage using Medicare Part D prescription drug benefit plan data from 2007 to 2015. Drug coverage was defined as inclusion of a drug on a plan formulary, evaluated at 1 and 3 years after FDA approval. For covered drugs, coverage was categorized as unrestrictive or restrictive, which was defined as requiring step therapy or prior authorization. Median coverage was estimated at 1 and 3 years after FDA approval, overall, and compared with a number of drug characteristics, including year of approval, CMS-protected class status, biologics versus small molecules, therapeutic area, orphan drug status, FDA priority review, and FDA-accelerated approval. RESULTS: Among 144 novel therapeutic agents approved by the FDA between 2006 and 2012, 14% (20 of 144) were biologics; 40% (57 of 144) were included in a CMS-protected class; 31% (45 of 144) were approved under an orphan drug designation; 42% (60 of 144) received priority review; and 11% (16 of 144) received accelerated approval. The proportion of novel therapeutics covered by at least 1 Medicare prescription drug plan was 90% (129 of 144) and 97% (140 of 144) at 1 year and 3 years after approval, respectively. At 3 years after approval, 28% (40 of 144) of novel therapeutics were covered by all plans. Novel therapeutic agents were covered by a median of 61% (interquartile range [IQR] = 39%-90%) of plans at 1 year and 79% (IQR = 57%-100%) at 3 years (P < 0.001). When novel therapeutics were covered, many plans restricted coverage through prior authorization or step therapy requirements. The median proportion of unrestrictive coverage was 29% (IQR = 13%-54%) at 3 years. Several drug characteristics, including therapeutic area, FDA priority review, FDA-accelerated approval, and CMS-protected drug class, were associated with higher rates of coverage, whereas year of approval, drug type, and orphan drug status were not. CONCLUSIONS: Most Medicare prescription drug plans covered the majority of novel therapeutics in the year following FDA approval, although access was often restricted through prior authorization or step therapy and was dependent on plan choice. very year dozens of novel therapeutic agents-small molecule drugs and biologics-enter the health care marketplace following regulatory agency approval. 1 Regulatory approval, however, ensures neither payer reimbursement nor patient access. Regulatory agencies, such as the U.S. Food and Drug Administration (FDA), are tasked with determining whether medical products are safe and effective for public use. Insurers, however, must decide which medical products and services are "reasonable and necessary," which is the standard used in determining coverage. 
2,3 As a result, in the United States, patient access to new medications following FDA approval is effectively determined by a complex system of public and private payers. In the United States, the largest payer is a government agency, the Centers for Medicare & Medicaid Services (CMS), which provides coverage for older adults through the Medicare program and for socially vulnerable children and families through the Medicaid program. Medicare is the larger of the 2 programs, with services covering roughly 1 in 6 (56.8 million) Americans and providing prescription drug benefits to • Regulatory approval of novel therapies by the FDA does not guarantee insurance coverage requisite for most clinical use. • There has been no systematic examination of the timing of or restrictions on insurer coverage of novel therapeutic agents following their approval by the FDA. • Medicare prescription drug benefit plans, which cover more than 40 million Americans, can provide insight into coverage trends. What is already known about this subject • While 90% of novel therapeutic agents were covered by at least 1 plan in the year after FDA approval, coverage patterns were heterogeneous and often used prior authorization or step therapy restrictions. • The median proportion of plans providing unrestrictive coverage was 29% at 3 years, and few therapeutics (4%) were covered by all plans without restrictions at 3 years. • CMS-protected drug status, FDA priority review, FDA-accelerated approval, and therapeutic area were each associated with higher rates of coverage, whereas year of approval, drug type, and orphan drug status were not. between 2006 and 2012. Following a previous approach, 20 we excluded reformulations of drugs, combination therapies, nontherapeutic agents (e.g., imaging contrast), and subsequent approvals of rebranded drugs for new indications. We also excluded drugs used exclusively for those indications outside the purview of Part D coverage (e.g., pediatric indication, over the counter, and weight loss) and those withdrawn from market less than 3 years after approval. Drug names were linked to their National Drug Code (NDC) numbers, and unmatched compounds were excluded. All drugs for which no coverage by any formulary was observed over the study period were reviewed to ensure that it was reasonable to expect Part D prescription drug coverage. Physician-administered drugs typically covered by Medicare Part A or Part B were excluded, 9 including drugs approved for in-hospital use only (alvimopan, used to improve postoperative bowel healing, and clevidipine for intravenous blood pressure control); intravitreal injections (ranibizumab, aflibercept, and ocriplasmin); procedural agents (fospropofol, collagenase, and polidocanol); and infusions administered in the outpatient setting, such as chemotherapy agents (brentuximab and carfilzomib). We determined the following characteristics for all novel therapeutics at the time of original approval by reviewing the relevant documentation in the Drugs@FDA database 1 : drug type (small molecule drug or biologic), orphan drug designation, FDA priority review designation, and FDA-accelerated approval status. In addition, we determined broad therapeutic area according to World Health Organization Anatomical Therapeutic Chemical codes, as used in a previously published work. 20 Categories were autoimmune, cancer, cardiovascular/diabetes/ lipids, dermatologic, infectious disease, neurologic, psychiatric, and other. 
Finally, we determined whether the drugs reasonably fell into a CMS-protected drug class, which are immunosuppressants for transplant organ rejection prophylaxis, antidepressants, antipsychotics, anticonvulsants, antiretrovirals for human immunodeficiency virus (HIV), and antineoplastics. 9 CMS Formulary Coverage We determined Medicare prescription drug benefit formulary coverage using the CMS Prescription Drug Plan Formulary, Pharmacy Network, and Pricing Information Files. 21 These files include data on formularies, including Medicare Part D standalone prescription drug plans and Medicare Advantage (Part C prescription drug plans). We obtained data from 2007 to 2015 (Quarter 2), and we linked each formulary to its plan (~3,000 plans). We excluded plans lacking formulary data and special needs plans (~600 plans annually) because they do not reflect general drug availability. Outcome Measures Our primary outcome was drug coverage, defined as inclusion of a drug on a plan formulary. No coverage was defined 43.2 million in 2016. 4 Medical services and physician-administered drugs are generally covered under Parts A and B, while prescription drug benefits are generally covered under Part D plans. Medicare Advantage, or Part C plans, cover all of these same services under health maintenance organization arrangements. CMS occupies a unique role in the American health care system because its coverage decisions may alter clinical practice, influence national coverage trends, and inform public debate. 5 By law, however, Medicare is prohibited from negotiating directly with pharmaceutical manufacturers. 6 Medicare prescription drug plan benefits are contracted to and sold by private insurers, who control coverage and benefits and negotiate pharmaceutical rebates, theoretically reducing the cost to consumers. This arrangement means that CMS does not always directly determine patient access to individual prescription medications. The FDA and CMS have implemented policies to support more rapid and reliable availability of novel therapeutics to patients. The FDA, for example, offers orphan drug designations, priority review, and accelerated approval, 7,8 while CMS mandates review timelines and has designated 6 protected drug classes in which plans must cover "all or substantially all medications." 9 The evolving role of these programs in facilitating patient access to novel therapeutics has been the subject of continuing analysis and proposed change. [10][11][12][13] Related to these policies, previous research has shown that there may be significant delays between FDA approval of new technologies and CMS coverage, which then delays clinical adoption. [14][15][16] In response, efforts have been made to align the FDA and CMS review schedules by developing a parallel review process for medical technologies seeking coverage under Medicare Parts A and B-which was made permanent in 2016 17,18 -to ensure that CMS issues a national coverage determination shortly after FDA approval. 19 However, no such program currently exists for prescription drug coverage. To date, there has been no study to determine if there are CMS coverage delays for newly FDA-approved prescription drugs. To quantify the timing of CMS coverage, we used Medicare prescription drug benefit formulary data from 2007 through 2015 to characterize rates of plan formulary coverage of therapeutic agents approved by the FDA between 2006 and 2012. 
In this study, we have analyzed the time between FDA approval and CMS coverage, differentiating between restrictive and unrestrictive coverage, as well as stratifying analyses based on therapeutic characteristics, such as CMS-protected drug class status, and regulatory characteristics, such as FDA priority review and accelerated approval status. These results provide unique insights into how federal policies influence coverage of novel therapeutic agents. ■■ Methods FDA Drug Approvals We identified all novel therapeutic agents approved by New Drug Application (NDA) or Biologics License Application (BLA) as a plan formulary on which the drug name did not appear, normally requiring a patient to assume full responsibility for the drug costs, except in rare circumstances (such as appeal or grand-fathered previous coverage). As a secondary outcome, we categorized coverage as restrictive or unrestrictive, defining restrictive coverage as plans requiring step therapy or prior authorization. At the plan level, we determined the percentage of plans covering each included drug in each year following approval, stratified by unrestrictive and restrictive coverage. Drug coverage was estimated at year 1 and year 3 following FDA approval. Data Analysis For each novel therapeutic agent, we used descriptive statistics to determine the percentage of plans providing coverage at years 1 and 3 following FDA approval. We then determined the percentage of plans achieving coverage at these predefined thresholds: any plan coverage, 50% of plans providing coverage, 90% of plans providing coverage, and 100% of plans providing coverage. The median coverage percentage was determined at each time point. Nonparametric paired tests (Wilcoxon signed-rank test) were used to compare median plan coverage 1 year and 3 years after FDA approval. At a plan level, we compared median coverage at 1 and 3 years after FDA approval by subgroups using either Mann-Whitney and Kruskal-Wallis tests as appropriate: year of approval, approval pathway (NDA vs. BLA), protected drug class, orphan drug designation, priority review status, accelerated approval, and therapeutic area. P values are reported without correction for multiple comparisons. Statistical analyses were conducted using GraphPad Prism (GraphPad Software, La Jolla, CA) and Microsoft Excel (Microsoft, Redmond, WA). Institutional review board approval was not required for any portion of this study, since it did not include human subject research. ■■ Results Novel Therapeutic Sample and Medicare Prescription Drug Plans There were 180 new small molecule drugs and biologics approved between 2006 and 2012 ( Figure 1). We initially excluded 22 drugs based on the following reasons: (a) 9 contrast or in-hospital diagnostic agents, (b) 2 agents were previously approved for alternate indications, (c) 3 agents were approved for an indication outside the purview of Part D coverage, (d) 3 agents were withdrawn from market less than 3 years following approval, and (e) 5 agents were excluded due to inability to confirm an NDC match (see Appendix A, available in online article). Thus, our initial analysis consisted of 158 newly FDA-approved therapeutics. Subsequently, an additional 14 novel therapeutic agents (see Appendix B, available in online article) were excluded from the study because they had no expectation of Part D prescription drug coverage, based on (a) no formulary coverage, (b) no plausible home administration, and (c) reasonable expectation of coverage by Medicare Part A or B. 
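The plan-level descriptive statistics described above come down to medians and interquartile ranges of per-drug coverage percentages, plus counts of drugs reaching fixed coverage thresholds. A condensed sketch follows; the coverage values are invented for illustration and are not the study data:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Percentile of a sample by linear interpolation between order statistics.
double percentile(std::vector<double> v, double p) {
    std::sort(v.begin(), v.end());
    const double pos = p * (v.size() - 1);
    const std::size_t lo = static_cast<std::size_t>(pos);
    const double frac = pos - lo;
    return lo + 1 < v.size() ? v[lo] * (1.0 - frac) + v[lo + 1] * frac : v[lo];
}

int main() {
    // Hypothetical per-drug values: percentage of Part D plans covering each drug
    // one year after FDA approval (one entry per novel therapeutic).
    std::vector<double> coverage = {10, 35, 48, 55, 61, 70, 82, 90, 95, 100};

    const double median = percentile(coverage, 0.50);
    const double q1     = percentile(coverage, 0.25);
    const double q3     = percentile(coverage, 0.75);
    std::printf("median coverage = %.0f%% (IQR = %.0f%%-%.0f%%)\n", median, q1, q3);

    // Share of drugs reaching a coverage threshold (e.g., covered by >= 50% of plans).
    const long n50 = std::count_if(coverage.begin(), coverage.end(),
                                   [](double c) { return c >= 50.0; });
    std::printf("covered by majority of plans: %ld of %zu drugs\n", n50, coverage.size());
}
```

The same summary, stratified by drug characteristics such as protected-class status, corresponds to the subgroup comparisons reported in the study.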
The final analysis therefore consisted of 144 novel therapeutic agents. The number of FDA approvals included in the final sample varied by year, ranging from a low of 14 in 2007 to a high of Excluding special needs plans and plans without corresponding formulary data, there was a range of Medicare prescription drug plans from a high of 3,095 in 2008 to a low of 2,538 in 2011. Novel Therapeutic Coverage Coverage of novel therapeutic agents tended to increase over time ( Figure 2). The proportion of novel therapeutics covered by at least 1 Medicare plan increased from 90% (129 of 144) to 97% (140 of 144) at 1 and 3 years following approval, respectively. The proportion of novel therapeutics covered by the majority (≥ 50%) of plans increased from 65% (93 of 144) to 78% (112 of 144) at 1 year and 3 years following approval, respectively. The proportion of novel therapeutics covered by greater than 90% of plans increased from 25% (36 of 144) to 35% (51 of 144) of therapies at 1 year and 3 years following approval, respectively. The proportion of novel therapeutics covered by all plans increased from 15% (22 of 144) to 28% (40 of 144) at 1 and 3 years following approval, respectively. Of those with coverage by all plans at 3 years following approval, 90% (36 of 40) were in a CMS-protected drug class, and 15% (6 of 40) had no coverage restrictions. Therapeutics with universal coverage without restriction were limited to antiretroviral HIV medications-darunavir, maraviroc, raltegravir, etravirine, rilpivarine, and the combination pill elvitegravir/cobicistat/emtricitabine/tenofovir. Among all newly approved therapeutics, the median proportion of plans with any coverage at year 1 was 61% (interquartile range [IQR] = 9%-90%) and 79% (IQR = 57%-100%) at year 3 ( Figure 3). The median proportion of plans with unrestrictive coverage was 21% (IQR = 9%-39%) at year 1 and 29% (IQR = 13%-54%) at year 3. There was a statistically significant difference between all pair-wise comparisons of total and unrestrictive coverage at year 1 and year 3 (P < 0.001). Four drugs (3%) had no coverage 3 years following approval. Two of the drugs, abobotulinumtoxina (injection muscle relaxant) and dienogest estradiol valerate (Natazia, an oral contraceptive therapy) received coverage after 4 years. One drug, glucarpidase (used to treat methotrexate toxicity), was initially covered but lost coverage, while spinosad (used as an antilice treatment) had no coverage at any time. Characteristics Associated with Novel Therapeutic Coverage There were significant differences in coverage observed when comparing novel therapeutic characteristics (Table 1). Differences in coverage rates were associated with protected class status, priority review status, accelerated approval, and therapeutic area but not with orphan drug status or biologics versus small molecules. For instance, among therapeutics included within a CMS-protected class, the median proportion of plans providing any coverage at 1 year following approval Years Following FDA Approval Covered by at least 1 plan (> 0% of plans) Covered by majority of plans (≥ 50% of plans) Covered by nearly all plans (≥ 90% of plans) ■■ Discussion In this study, we used Medicare prescription drug benefit plan data to characterize rates of formulary coverage for novel therapeutic agents following FDA approval over a 7-year period. 
While most Medicare plans covered the majority of novel therapeutics approved by the FDA in the year following approval, drug coverage was heterogeneous, with potentially significant delays in access to some novel medications. Moreover, restrictions such as prior authorization and step therapy requirements were commonly used. Finally, multiple novel therapeutic characteristics-including CMS-protected status, FDA priority review and accelerated approval, and therapeutic area-were associated with higher rates of coverage. Most Medicare plans covered the majority of novel therapeutics approved by the FDA after 1 year, but even more plans covered novel therapeutics by 3 years after approval. While CMS mandates review of new approvals within 180 days, 9 it is clear that FDA drug approval does not guarantee insurer coverage, even after several years. This heterogeneous pattern has been described previously for medical technologies-procedures, devices, and drugs-covered by Medicare Parts A and B. In these cases, coverage may be determined by local private contractors by local coverage determinations, 22,23 or CMS may issue national coverage determinations. 15 This process leads to unpredictability for technology manufacturers and consumers. Recently, the FDA and CMS have instituted a parallel review process to coordinate Medicare Parts A and B coverage and FDA approval decisions for medical devices. 24 As of 2017, 2 devices have successfully navigated the program, and there continue to be efforts to optimize program use. 17,25 While insurer coverage is not required for clinical use, lack of coverage may be a significant barrier to widespread adoption. It may be worth considering the expansion of this parallel review program to include novel therapeutic agents that are expected to be covered as part of Medicare's prescription drug benefit. In this study, FDA priority review designation-most often reserved for drugs thought to be a "significant improvement" over existing therapy 26 -was associated with increased rates of insurer coverage. An association between coverage and FDAaccelerated approval-in which the FDA grants approval based on surrogate endpoints to speed drugs to market-was similarly noted. Because these pathways are intended for therapeutics of greater clinical potential importance, in some respects, our findings are reassuring. However, controversy over the accelerated approval pathway exists because it may require insurers to consider coverage of expensive medications before definitive evidence of clinical efficacy has been generated. 11 In and raised costs as a major impetus for the proposed changes. While the economic effect of such changes is beyond the scope of this study, the results suggest, not unexpectedly, that CMSprotected status is associated with higher coverage of novel therapeutics. A large proportion of covered novel therapeutic agents were covered with restrictions, including prior authorization or step therapy requirements. Only 4% of drugs had universal unrestrictive coverage 3 years following approval, and these drugs were limited to antiretroviral HIV therapies. These utilization management strategies can influence patterns of therapeutic usage to control costs and incentivize use of safer, more effective medications. 33 CMS provides broad latitude for such strategies according to "existing best practices." 
9 These findings are consistent with a 2009 study of top-selling biologics, which showed that plans often use prior authorization and other costcontrol measures. 34 The common use of either step therapy or prior authorization suggests that plans are attempting to ensure that patients fulfill certain criteria or try alternatives before receiving novel therapeutics. Given the broad and sophisticated addition, there are concerns over whether evidence of clinical efficacy is being routinely generated following surrogate endpoint-based approval. [27][28][29] Given the trend towards increased insurer coverage of accelerated approvals, therapeutics in this pathway may benefit from use of coverage with evidence development requirements, ensuring that adequate evidence is generated after FDA approval to inform clinical decision making. Likewise, drugs within CMS-protected classes achieved coverage at rates exceeding 90%, compared with 50% among nonprotected drug classes at 1 year after approval. This result is consistent with previous studies that showed high coverage rates of cancer therapeutics and psychotropics (including protected classes of anticonvulsants, antidepressants, and antipsychotics). 30,31 Recently, there have been a number of proposals to reduce the number of protected classes, including a 2014 CMS proposal to eliminate protections for antidepressants, immunosuppressants, and eventually antipsychotic therapies. 13 The proposal-which was ultimately withdrawn because of concerns regarding vulnerable populations 32 -cited anticompetitive insurer practices that limited effective choice Subgroup Analysis of Medicare Prescription Drug Coverage 1 Year Versus 3 Years After FDA Approval these requirements in greater detail to determine if they are aligned with the letter and spirit of existing regulations. Finally, there may be utility in creating a case-by-case mechanism to guarantee evaluation for coverage of novel therapeutics, a role that an expanded FDA-CMS parallel review process could fill. 44,45 Parallel review is currently limited to medical devices seeking an expedited CMS national coverage determination for Medicare Part A or B. Expanding this program to include prescription drugs, provided that sufficient resources are made available to support such an effort, might not only streamline coverage determinations for prescription drugs, but also could incentivize evidence generation relevant to the Medicare beneficiary population, which is generally older, more often female, and affected by more comorbidities than the general population. 46 Limitations Our analysis has several limitations to consider. First, novel therapeutic agents represent a diverse set of medical technologies, and our attempts to group drugs may oversimplify complex approval pathways and use patterns. While coverage was evaluated at multiple time points and stratified by year of approval, we could not control for all temporal trends, such as changes in FDA, CMS, or commercial insurer policies. In addition, data are presented as proportions of the annual number of prescription drug plans but may not necessarily account for nonrandom variability in plan enrollment or benefit structure. Second, we have defined restrictive coverage to exclude formulary tier, which determines patient cost sharing, because formularies have expanded the number of tiers from 4 to 5 or even 6 over the past decade. Therefore, tiering could not be accurately compared across the time period of our study. 
47 Third, this analysis may not be generalizable to non-Medicare prescription drug plans. Medicare, however, provides coverage for the largest number of beneficiaries in the United States, and its coverage policies have important implications for pharmaceutical and biologic manufacturers. Finally, our analysis was limited to the use of Medicare formulary files, which may not include records for all plans, formularies, or novel therapeutic agents due to inaccuracies or delays in data reporting. However, Medicare records are well maintained, and our exclusion criteria were formulated to maintain external validity. ■■ Conclusions This study characterizes rates of Medicare prescription drug benefit plan coverage for novel therapeutic agents. The analysis underscores that, given the heterogeneity of plan coverage, patient access to medication is largely dependent on plan choice. Most Medicare plans covered the majority of novel therapeutics in the year following FDA approval, although use of coverage restrictions, this may be an important area for regulatory review in the future. Our analysis showed significant coverage variability for novel therapeutic agents, suggesting that patient access to new drugs is highly dependent on plan choice. While 90% of drugs in the sample were covered by at least 1 plan in the year after approval, few drugs were covered by every plan. By design, Medicare prescription drug plans incorporate free market features that ideally allow patients to choose the plan that best addresses their health care needs. 35 However, in reality numerous and complex plan choices, limited by geography and existing medication needs, may complicate decision making, 36 and studies have shown that patients have trouble predicting their health care use patterns for the coming year. 37 Given that the drugs included in our study are new to clinical practice, it is unlikely that patients could predict need for these medications and plan accordingly. Some have suggested developing a standardized national formulary, 38 a process that could offer significant advantages to patient access but would come at the expense of choice. Policy Recommendations Overall, our findings suggest a number of policy implications for drug makers, regulators, insurers, clinicians, and patients. First, while the majority of novel therapeutic agents are covered by Medicare prescription drug plans by the year following FDA approval, access to novel therapeutics appears to be largely dependent on plan choice. However, patients have limited ability to anticipate need for novel therapies, since these medications are new to the market. Moreover, many patients select neither the cheapest plans that fulfill their needs nor reassess their insurance plans following initial selection despite a number of tools that compare plans (such as Medicare Plan Finders). 39,40 Several approaches have been suggested to improve patient plan selection, from changing the user interface of existing systems to creating novel online decision support tools. 41,42 Reduction in the number of plan choices has also been shown to decrease cognitive overload and improve optimal plan choice. 36,39,43 Regardless of broader changes to the insurance marketplace, our study underscores the importance of policies that promote annual reevaluation of plan choice as coverage evolves and new drugs are approved by the FDA and are covered, or not, by prescription drug plans. 
Second, despite CMS-protected drug class designation, a number of insurers may still restrict drug access by using formulary utilization management tools. CMS currently places few limits on utilization management, mandating use of "existing best practices," which creates an imprecise framework that could affect patient access. 9 Because we found that coverage of novel therapeutics often occurs with step therapy or prior authorization requirements, future studies should evaluate
Mechanical and Microstructural Analysis of Friction Surfaced Aluminum Coatings on Silicon Nitride Ceramic Substrates The lack of suitable techniques for joining Si3N4 ceramics with metals has limited the usage of this otherwise outstanding material for composite applications. In this study, aluminum AlMgSi0.5 (EN AW-6060) was coated onto silicon nitride Si3N4 ceramic substrates using friction surfacing technology. Experimental work revealed that the harmful effects of thermal shock (e.g., substrate cracking, coating delamination) observed with other material combinations can be avoided by selecting materials with a low coefficient of thermal expansion, low Young's modulus and high thermal conductivity. Design-of-experiments-derived models for coating thickness and bonding strength fit the data well (i.e., the regression model accounts for most of the variation in the response variable). Whereas the coating thickness is predominantly dependent on the rotational speed used, the bonding strength is also affected by the traverse speed. Coating thicknesses up to 2.03 mm and bonding strengths of 42.5 MPa were achieved. Deposition rates exceed those of physical vapor deposition by a magnitude of ×1000 and the bonding strength is on par with thin-film metallization. Scanning transmission electron microscope analysis revealed the formation of a glassy phase at the interface. Using energy-dispersive X-ray spectroscopy, a high silicon and oxygen content with smaller percentages of aluminum and nitrogen was detected. High-resolution transmission electron microscope imaging revealed no distinct lattice structure, leading to the assumption that the composition is predominantly amorphous and consists of SiAlON. I. INTRODUCTION The continuous development of materials plays a very important role in industry. Increasingly, materials are used at their physical and mechanical limits and must be constantly developed to allow for increasingly demanding requirements. Two such requirements are tribological and thermal, with ceramic materials often employed when these factors are critical, gaining immense importance particularly over the last decades. [1,2] For centuries ceramic materials have been an integral part of everyday life, and advances in material science have led to the development of new ceramics which, in 2018, accounted for a world-wide turnover of more than $229 billion. [3] According to the ceramic industry, one of the biggest sectors is formed by advanced ceramics, [2] which are defined as those with highly engineered and precisely specified attributes. [4] They are used in many sectors including the automotive, medical and electronics industries. [5] Components with locally differentiated material properties are becoming increasingly important, as they allow for specific adaptations that are tailored towards an application.
Most of these components need to be fixed in a specific location or combined with other parts; but joining these dissimilar materials (i.e., ceramics with metals) for further use in assemblies is challenging. Due to the poor wettability of ceramics by metals it is difficult and technologically complex to join these materials. Low-cost casting processes result in poor bonding at the interface between metal and ceramic parts, and the technologies currently used are complex and relatively costly. A modern solution to these problems is to use the technique of friction surfacing to apply a thick metal coating onto the surface, which in turn can be used as a base for further processing. Earlier work by the authors [6] showed that temperatures up to 580°C can be reached during the coating process. Despite preheating the substrates to reduce the temperature difference, micro-cracking in the substrate caused by the thermal shock was observed. The research described in the current paper investigates the use of alternative substrate materials to overcome these problems. By way of comparison, Figure 1 shows the complexity of two different metallizing techniques, (a) molybdenum-manganese metallization (Mo-Mn) and (b) thin-film metallization, along with (c) friction surfacing. The Mo-Mn method was developed nearly a century ago and is a mature process. [8] A ceramic substrate is first coated with a Mo-Mn paste, fired in wet hydrogen at 1450°C and then plated with nickel (Figure 1(a)). At these high temperatures a MnAl2O4 spinel is formed and the glassy phases in the Al2O3 substrate migrate into the pores of the metallizing layer by capillary forces, forming a strong bond by producing anchors. [9,10] By modifying this technique (e.g., paste composition, firing temperature) different ceramics can be metallized. Using a paste consisting of Ag, Pd, inorganic filler and glass, Wenzel et al. [11] metallized Si3N4 ceramics, achieving bonding strengths up to 23 MPa. Figure 1(b) shows a second commonly used metallizing process. Using physical vapor deposition (PVD), different types of metal layers can be deposited onto ceramic substrates. The metal is evaporated by heating above the gas transition temperature, or sputtered by means of a process gas containing ionized particles, and condenses at the surface of the substrate. Bonding mechanisms can range from mechanical interlocking to chemical bonding. [12][13][14] This process can also be used to apply a thin aluminum coating onto Si3N4 substrates, as reported by Brener et al. [15] The authors report that chemical reactions occur at the interface, forming AlN layers which also increase in thickness after heat treatment. As stated by Walker and Hodges, [16] these techniques (Figures 1(a), (b)) are well developed and have been used for decades. But the need for high-temperature furnaces and plating methods makes them expensive, and the process itself is time consuming and not suited to high deposition rates or low quantities. Friction surfacing of ceramics is a one-step process (Figure 1(c)), provides similar strength to thin-film metallizations (e.g., PVD) [16] and does not need a furnace or atmospheric control. Friction surfacing is also suited for low-quantity production, and coating rates exceed those of physical vapor deposition techniques by a magnitude of ×1000. Thick-film metallization is a necessity for further connection of ceramic components to other assemblies using common welding processes, which is not possible with PVD.
Successful application of this technique to coat Al2O3 ceramics with aluminum (metallizing) was reported by Atil et al. [6] in 2020. This paper addresses the detrimental effects of thermal shock (e.g., substrate cracking) observed in the previous work and gives an overview of the selection criteria used to choose an appropriate material for the process. In this study, a thick-film aluminum coating (i.e., AlMgSi0.5 / EN AW-6060) was successfully applied onto silicon nitride Si3N4 ceramic substrates by using friction surfacing technology. This opens new paths for the use of this material in composite applications, with bonding strength and thickness exceeding what is possible with thick-film-paste metallizing technologies. [2,11]

Fig. 1—(a) Molybdenum-manganese metallization, [7] (b) thin-film metallization [7] in comparison to (c) friction surfacing.

The uneven temperature distribution in the substrate, induced by the hot coating material applied onto the surface during the coating process, leads to different thermal expansions. This inconsistency induces stresses in the substrate material. [17] The ability of a material to withstand abrupt thermal changes is referred to as thermal shock resistance. The thermal shock resistance can be quantified by the coefficient of thermal expansion (α), the Young's modulus (E), the Poisson's ratio (ν) and the prevailing temperature difference (ΔT) between the substrate and coating material during the process. The following equation can be used to calculate the stress (σt) [18]:

σt = E · α · ΔT / (1 − ν)   [1]

If this value exceeds the mechanical strength (σ) of the substrate, failure occurs. Thus, for the material to remain failure free:

σt < σ   [2]

Combining Eqs. [1] and [2], a value for ΔTmax (i.e., the maximum temperature difference the material can withstand without failure) can be derived, which is equivalent to the first thermal stress resistance parameter R (also termed the thermal shock parameter of first type [19]) as suggested by Hasselman [20]:

R = ΔTmax = σ · (1 − ν) / (E · α)   [3]

Taking the dissimilar thermal conductivities (λ) of ceramics into account, the second thermal stress resistance parameter R′ (the thermal shock parameter of second type [19]) should also be considered for a direct comparison of materials [18]:

R′ = R · λ = σ · (1 − ν) · λ / (E · α)   [4]

As the thermal shock resistance can be described by R and R′, these material properties have been calculated for different ceramic materials and used for preselecting a suitable material combination. Table I shows specific material properties of various ceramics and their corresponding thermal shock resistance values R and R′. Looking for the highest value of R′, silicon nitride (Si3N4) surpasses all listed ceramics. Boron carbide (B4C) and silicon carbide (SiC) also present themselves as good candidates, so their material properties should be considered further. Thermal analysis previously reported [6] revealed that temperatures up to 580°C can be reached when friction surfacing aluminum oxide with aluminum. Kılıcarslan et al. [23] found that boron carbide starts to oxidize at 500°C, whereby a protective layer of boron oxide is formed; however, this shielding effect is not enhanced at elevated temperatures, [24] which leads to a deterioration of the mechanical properties. This fact and the relatively low thermal shock resistance compared to silicon nitride led to the exclusion of this, otherwise seemingly appropriate, material from further experimental analysis. Following the same procedure for silicon carbide reveals that it is also prone to oxidation, but at higher temperatures.
It forms a layer of silicon oxide which acts as a protection against further oxidation. This layer is effective up to temperatures of 1723°C. [25] Due to the high thermal conductivity of SiC, the thermal shock parameter R¢ is only second to Si 3 N 4 ; but the thermal shock parameter of first type is not on par with Si 3 N 4 . Silicon nitride has the highest thermal shock resistance of all the technical ceramics displayed and it additionally sustains its strength at elevated temperatures. [26] It is used in high temperature applications such as gas turbine engines. [27] Its thermal stability at elevated temperatures, high thermal shock resistance and superior mechanical properties identifies Si 3 N 4 as an excellent candidate for further research. As a result, it has been chosen as a substrate material. The selection of a suitable coating material also strongly influences the outcome of the experiments. Not only has the coating material to be ductile and weldable, but the alloy elements should also be able to create a chemical bond with the substrate. Aluminum-magnesium-silicon alloys match the requirements, are affordable and the EN AW-6xxx group of alloys have high ductility and strength. The high silicon content improves the weldability and may increase the bonding strength by forming a chemical bond. The Magnesium content has the effect of increasing the strength by inducing age hardening [28] and additionally has the potential to increase the bonding strength by forming Mg 2 Si and AlN compounds in the interface. [29] For the reasons stated above AlMgSi0.5 (EN AW-6060) has been chosen as the coating material. A. Experimental Setup and Method For the experiments the ceramic specimens were friction surfaced using a modified milling machine (DMG Mori Co., Germany, Model: Maho MH700). This is the same set-up as used in previous work and Figure 2 shows the converted device. A high-frequency spindle drive was mounted on a carriage and, to keep the axial force constant during the surfacing process, a pneumatic cylinder was mounted between the spindle and machine head. Also, a bespoke clamping device was redesigned and manufactured (i.e., adding force adjustment functionality and further enhancement on the clamping load distribution) to clamp the ceramic substrates, as shown in Figure 3. The clamping device was able to accommodate various different material sizes and shapes. Start and end-plates were milled to conform to the outer contour of the substrate, which also acted as a means of preventing specimen rotation. The clamping force can be adjusted with a screw on the clamping unit and can provide a load of up to 2.5 kN. Slots to allow for installation of heating cartridges were incorporated into the base, but were not necessary for the material combination used in these experiments. B. Material Preparation For these experiments, aluminum alloy rods (EN AW-6060) and ceramic plates (Si 3 N 4 ) were cut into the required shape. The plates, as delivered, had a dimension of 50 9 50 mm; but due to the limited number of Si 3 N 4 samples the specimens were cut into four equal rectangular parts of approximately 25 9 25 mm. The coating rod was initially pressed against the start-plate and then traversed over the substrate after forming a flash. This approach was used because of the small sample dimensions which dictated that a homogeneous coating should be produced across the substrate, from one side to the other. 
Also with this method the coating rod does not require the preparation of drilling down its rotational axis so creating a thickwalled tube, as was the case in previous work. [6] This was previously done to alleviate the local stress peaks which were induced by the rod tip at the first stage of the coating process (i.e., pressing the rod onto the substrate surface) which led to surface cracks. According to Liu et al. [30] the generated heat at the friction zone is transferred by close contact melting towards the center of the rod forming a consistent quasi-liquid-layer between the rod and the surface of the substrate. Figure 4 shows the dimensions of the materials used. The end surfaces of the aluminum rods were deburred and cleaned with isopropanol. The ceramic substrates were lapped to remove the sinter skin and produce a plane-parallel surface. All specimens were degreased before use. C. Data Acquisition Factors such as coating temperature and axial force can influence the quality of the bond [6] and these must be recorded accurately during the process. The axial force was measured with a load cell attached to the spindle, whereas the coating temperature was measured with Type K thermocouples. To get an accurate temperature reading of the coating process at the interface, the thermocouple tip had to be placed flush with the top surface of the substrate. To facilitate this, specimens were cut into two pieces incorporating a slot to hold the thermocouple. Because of the high thermal shock resistance of Si 3 N 4 substrate, unlike the previously reported work with alumina, preheating was not used. The specimens were clamped in the clamping device and the coating rod was rotated at the desired speed. Soon after pressing the rod onto the start-plate the flash started forming, whereupon the feed was started. Coating parameters were derived from earlier experiments conducted with Al 2 O 3 which indicated that the coating thickness increased when high rotational speeds, high axial force and low traverse speed were used. Whereas the bonding strength improved with low rotational speed and low axial force, the traverse speed showed no significant effect. In this previous work it was observed that high axial forces led to micro-cracks beneath the surface which were detrimental to the bonding strength. Thus, the factors influencing the bonding strength should be interpreted with caution. Using these previously published coating parameters (see Atil et al. [6] ) as a starting point, trial runs were conducted, but were unsuccessful. The coating was not consistent and no bonding to the substrate was achieved. In an attempt to improve the bonding strength 124-VOLUME 54A, JANUARY 2023 the axial force was increased which led to spindle stalls because of the greater friction between the coating rod and the substrate. Therefore, the spindle speed was increased, and successful coatings operations were then achieved. As a result of the low traverse speeds, the coating tended to be inconsistent over the complete length of the substrate. By increasing the traverse speed by 25 pct uniform deposition of the coating material was accomplished. Because of the low quantity of the specimens the varying parameters have been reduced whereby the axial force has been fixed at the maximum value (i.e., 2493 N). D. Bonding Strength The bonding strength was measured with an adhesion tester (DeFelsko Co., Model: Positest AT-A). 
The dolly had a defined stepped area on the bottom which was glued onto the coating with a high strength adhesive (HTK Ultrabond 100, HTK GmbH, Germany). Using a hollow-core drill this area was separated from the rest of the coating to make sure that only this specific area was pulled off when performing the test. Figure 5 shows one of the specimens and the detached dolly after the pull-off test. E. Coating Thickness The coating thickness was measured with a height measuring instrument (Digimar-817, Mahr GmbH, Germany) on four points with a probe (Figure 6(a)) and the mean value found. Figure 6(b) shows the specimen and the corresponding measurement points. F. Design of Experiments (DoE) As with previous work, Design of Experiments (DoE) tools were used to investigate the effects of the coating parameters on the coating thickness and bonding strength. To minimize the quantity of specimens needed the process parameters have been reduced to two, and a two-level full factorial design was chosen. To identify if curvature in the response is present center points were added. By using only two input factors, one center point and four replications, the total number of successful specimen tests required was calculated to involve 22 samples. These can be summarized as four parameter sets with four replicates and one additional parameter set used for the center points with six replicates. Table II shows the parameters used for the new experiments. The objective of the experiments conducted was to identify the effects of the process parameters, rotational speed and traverse speed, on the response, bonding strength and coating thickness. These process parameters are related to physical variables such as temperature and pressure which in turn influence the achievable bonding types. The DoE analysis is, in this case, a means-to-an-end to investigate the relationship between process parameters, physical variables and binding mechanisms. The results of the experiments were fitted into a regression model which can also be used for predicting new sample data. G. Thermal Shock Despite the high-thermal shock resistance of the Si 3 N 4 substrate material (i.e., R ¼ 768 K [31] ) additional thermal shock tests were conducted. For this purpose special test equipment was used where Figure 7(a) shows the apparatus and Figure 7(b) the specimens placed on the sample holder. The test geometry was chosen so that the thermal shock tests reflected the loading case during the coating experiments and measured £ 25 mm 9 10 mm. These were cut from a 25 mm diameter rod, lapped to remove the sinter skin and irregularities caused by the cutting process; the perimeter was chamfered to eliminate the effect of stress concentrations due to defects introduced during material preparation. The samples were heated up to the various required thermal shock temperatures (i.e., representative friction surfacing temperature of 600 K discussed in Section III-C, thermal shock parameter of first type 770 K, and maximum temperature 1000 K) with a heating rate of 10°C/min in the tube furnace. To achieve a uniform temperature through the material bulk, the specimens were held in the furnace for a period of 10 minutes after reaching the desired temperature where upon they were quenched in the quench medium (i.e., water at 22°C). After a period of 3 minutes the specimens were removed from the bath and dried in a laboratory oven at 120°C for 2 hours and set aside for cooling down at ambient air conditions (22°C). 
A dye penetration test (MR-313DL, MR Chemie GmbH, Germany) was carried out to improve the visibility of the cracks. Figure 8 shows a tested specimen with applied dye penetration test. The specimens were cleaned with a universal cleaner (MR-79) then penetrant was applied onto the surface (MR-313DL) with any excess removed using water. A developer (MR-70) was applied onto the surface to improve the visibility of any cracks that had been formed. The penetrant will indicate a wider and deeper crack by showing a wider and more intensive dye color on the surface of the specimen. [32] As the developer conceals the crack path it was removed to permit further analysis of the specimens. A. Coating Thickness In the samples prepared as previously described, the coating thicknesses were seen to vary from 0.76 to 2.03 mm; depending on the process parameters used. To simplify the presentation in the tables and equations, abbreviations for rotational speed (r) and traverse speed (f) have been used. Using the response data of the experiments a simple multiple linear regression model (i.e., least squares fit) was developed. The calculated model for the coating thickness is as follows: Looking at Figure 9 the results at the five parameter sets used are shown as dots along with the calculated linear regression surface derived from the model. Examining the plot, five datapoints (P1, P3, P9, P14 and P18) can be identified which have relatively high residuals (i.e., high deviation from the model prediction). Checking the parameter set repetitions, response and surface structure of the coating, datapoint P9 was identified as an outlier. A visual examination of the specimen evidenced a smeared (coating) surface structure which could have been a result of spindle speed fluctuations at lower rotational speeds. This was not observed on any of the repetitions for this parameter set and thus was removed from the analysis, whereas datapoint P1 had a large residual non-representative measurement attributed to a burr on the coating. No reasons for dismissing datapoints P3, P14 or P18 were found: the surface structure and coating measurement showed no evidence of error. But the fact that all the points with a traverse speed of 400 mm/min and rotational speed of 4000 rpm shows a high deviation from the model prediction, indicates that using this parameter set the process becomes unstable. To quantify the significance of the parametric change on the coating thickness an analysis of the variance was made which is shown in Table III. P values higher than the confidence level (a ¼ 0:05) are assumed to not have a significant effect on the response. [33] As can be seen, only the rotational speed has a significant effect on the coating thickness showing a P value < 0.001, whereas the traverse speed and the two-way interaction of both parameters does not have a significant effect. Looking at the lack-of-fit P value, which is relatively high at 0.948, indicates that the regression model fits the underlying data. [34] Also no curvature was detected (i.e., P value <0:05). Checking the model summary, R-sq (a measure for the accuracy of the regression model) is relatively high with 71.66 pct; but lower than what was achieved in previous work using an aluminum oxide substrate [6] (81.32 pct). By removing datapoint P9 and correcting datapoint P1 (i.e., remeasuring) the R squared value increased by 13 pct to R-sq = 84.65 pct. 
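The regression and R-sq figures quoted above come from a least-squares fit of a two-factor model with an interaction term (plus a center point indicator, introduced below). A minimal, self-contained sketch of that fitting step is given here for orientation; the response values are synthetic placeholders generated only to make the example runnable, not the measured coating thicknesses, and the center-point levels of 5000 rpm and 300 mm/min are assumed midpoints of the stated factor ranges.

```python
# Minimal sketch of the least-squares fit behind the DoE models:
#   response = b0 + b1*r + b2*f + b3*r*f + b4*CtPt
# The response values below are synthetic placeholders, not measured data.
import numpy as np

# Design: 4 corner points with 4 replicates each, plus 6 center-point runs (22 in total).
corners = [(4000, 200), (4000, 400), (6000, 200), (6000, 400)]
runs = [(r, f, 0) for r, f in corners for _ in range(4)] + [(5000, 300, 1)] * 6

rng = np.random.default_rng(0)
r = np.array([run[0] for run in runs], dtype=float)
f = np.array([run[1] for run in runs], dtype=float)
ctpt = np.array([run[2] for run in runs], dtype=float)
# Hypothetical "true" surface plus noise, used only to make the sketch runnable.
y = 3.0 - 2.5e-4 * r + 1.0e-3 * f - 1.0e-7 * r * f + rng.normal(0.0, 0.1, r.size)

X = np.column_stack([np.ones_like(r), r, f, r * f, ctpt])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ beta
r_sq = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(beta, 7))
print(f"R-sq = {100.0 * r_sq:.2f} pct")
```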
Using the corrected response data of the experiments including center points (CtPt), the calculated model (i.e., linear least squares fit) for the coating thickness again contains the rotational speed, traverse speed and interaction terms, extended by a center point indicator that equals 1 for the center point runs (see Table II), or 0 otherwise; this shifts the regression model to fit the data at this midpoint. By constructing a Normal Probability Plot, the distribution of the data was assessed to be normal. To get a better view of the interactions between the parameters, an interaction plot for the coating thickness was constructed (see Figure 10). Evaluating the plot, it can be seen that at higher rotational speeds the coating thickness decreases, and the influence of the traverse speed is not significant. At lower rotational speeds the process becomes unstable and, because of the wide error band, the influence of the traverse speed on the coating thickness cannot be determined reliably. This is in marked contrast with what was reported for aluminum oxide-aluminum specimens, where it was discovered that coating thickness was greatest at higher rotational speeds and low traverse speeds. [6] This was explained by the increase of frictional area at the inner diameter of the rod due to the removal of the core material, so forming a thick-walled tube. By doing so, an extra space was created at the center of the rod for the flash to flow and form, which in turn increases the frictional area. The current experimental results would seem to validate this reasoning, as by not removing the inner diameter of the coating rod no additional area for contact and disposal of material is present. Thus, increasing the rotational speed only increases the flash material. This is also in line with what has been published concerning friction surfacing of metals. [35,36]

B. Bonding Strength

Tests of bonding strengths made on the specimens were found to vary from 1.8 up to 42.5 MPa. The calculated simple regression model (i.e., linear least squares fit) for the bonding strength is as follows:

Bonding strength = −82.7 + 2.27 × 10^−2 · r + 18.1 × 10^−2 · f − 5.00 × 10^−5 · r · f   [7]

Figure 11 shows the 3D scatterplot and linear regression surface of the results. Unusual observations can be seen at datapoints P4, P8, P15 and P19 (i.e., high deviation from the model prediction). Examination of the coated specimens and their corresponding dollies provides no satisfactory evidence for dismissing these points. Table IV shows the corresponding P values and the model summary. It can be seen that both test parameters of rotational speed and traverse speed, as well as their two-way interaction, are significant (P value < 0.05). Again, no curvature was detected (i.e., the curvature P value is above 0.05) and the lack-of-fit is not significant. Compared with earlier findings from aluminum oxide-aluminum specimens, which showed an R-sq value of only 24.52 pct, [6] this model summary has a relatively high R-sq value of 80.20 pct; so the regression model accuracy increased significantly. It was previously noted that cracking of the substrates due to high coating temperatures and low thermal shock resistance led to poor data correlation and a low R-sq value. By using Si3N4 as substrate material, which has the highest thermal shock resistance of all technical ceramics, the lack of post-test surface cracks would seem to confirm this assumption. The acquired data would not have been influenced by an unaccounted factor, namely cracking, and so showed a greater consistency.
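For orientation, Eq. [7] can be evaluated directly at the corner and center points of the design; a short sketch is given below. The corner levels (4000/6000 rpm and 200/400 mm/min) are those used in the experiments, while the center-point levels of 5000 rpm and 300 mm/min are assumed midpoints, and the center-point correction of the extended model is omitted because its coefficient is not reproduced here.

```python
# Evaluate the fitted bonding-strength regression, Eq. [7], at the factorial
# corner points and the (assumed) center point of the design.
# The CtPt correction term of the extended model is intentionally omitted.

def bonding_strength(r_rpm: float, f_mm_min: float) -> float:
    """Predicted bonding strength in MPa for rotational speed r (rpm) and
    traverse speed f (mm/min), according to Eq. [7]."""
    return (-82.7
            + 2.27e-2 * r_rpm
            + 18.1e-2 * f_mm_min
            - 5.00e-5 * r_rpm * f_mm_min)

if __name__ == "__main__":
    design_points = [(4000, 200), (4000, 400), (6000, 200), (6000, 400),
                     (5000, 300)]  # last entry: assumed center point
    for r, f in design_points:
        print(f"r = {r} rpm, f = {f} mm/min -> {bonding_strength(r, f):5.1f} MPa")
```

As expected from the interaction plot, the highest predicted value occurs at high rotational speed combined with low traverse speed.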
Essentially, what was being tested in the pull-off tests was previously not always a separation of the coating from the substrate; but between layers of the substrate due to micro-cracks beneath the surface. Data distribution was classed as normal and checked by utilizing a Normal Probability Plot. As stated above all factors and their two-way interactions have a significant effect on the response. To get a better view of these interactions an interaction plot (see Figure 12) was constructed. It can be seen that traverse speed has a slightly greater impact on the bonding strength than the rotational speed; but they are otherwise quite similar. Looking at the plot it is clear that increasing the rotational speed will increase the bonding strength, whereas increasing the traverse speed will decrease the bonding strength. Highest values for the response can be achieved by increasing the rotational speed and decreasing the traverse speed. This is most likely caused by the heat generated during the process. By increasing the rotational speed, and similarly by decreasing the traverse speed, more heat is generated in the friction zone: this surplus heat can be transferred to the interface increasing the bonding strength. This is in line with publications in the area of friction welding of ceramics where higher rotational speeds tend to increase the bonding strength. [37,38] In conclusion to the DoE analysis, it can be said that using a ceramic substrate material with a high thermal shock resistance is beneficial for the process. High bonding strength and coating thicknesses can be achieved without damaging the substrate. C. Thermal Analysis This section considers an evaluation of the thermal conditions prevalent during friction surfacing which is not directly controlled by the input parameters r and f, but is a peculiarity of the process itself. According to Liu et al., [30] who applied contact melting theory to friction surfacing, the temperature at the interface reaches a stable plateau during friction surfacing which is close to the melting point of the coating rod because an equilibrium is established between melting and solidification of the material. According to Edelman et al. [39] and Brener et al., [15] who studied interfacial reactions of Al/Si 3 N 4 systems, AlN-like layers can occur and increase in thickness when increasing the temperature up to 600°C. So, in order to evaluate likely chemical reactions it is necessary to establish the magnitude of the peak interface temperatures during the surfacing process. In our case this may affect bonding strengths and has been analyzed in this section. Again, it follows the methodology previously used and detailed in a past article. [6] Figure 13 shows two examples of the temperature readings for the tests made at 6000 rpm, and a traverse speed of 200 mm/min and 400 mm/min (0-distance traveled is the datum at the center of the starting plate). It can be seen that the lower traverse speed (blue plot, left hand side) shows a higher maximum temperature, but the readings are in a similar range when compared with the higher traverse speed (i.e., 567.08°C and 584.37°C) with a temperature difference of only 17.29°C. By repeating the temperature tests at the specified parameters, minimum/maximum values and the corresponding mean were determined. 
The mean value for three specimens tested at 6000 rpm and 200 mm/min was calculated to be 585.60°C with a standard deviation of ±3.60 K, whereas that for three specimens tested at 6000 rpm and 400 mm/min was 563.56°C with a standard deviation of ±12.97 K. The results from specimens tested at 6000 rpm and a traverse speed of 400 mm/min demonstrate a larger spread, which was assessed as being due to movement of the thermocouples. In addition, higher traverse speeds reduce the thickness of the quasi-liquid layer between the rod and the substrate, which could lead to increased friction when a greater portion of this layer is deposited on the surface, with shearing forces affecting the thermocouple readings. On the other hand, lower traverse speeds lead to an increase in the heat generated at the friction zone and an increase in the thickness of the quasi-liquid layer which, in turn, increases the temperature readings. Looking at the ternary phase diagram of the coating material (AlMgSi0.5), there is no significant change in the percentage of the liquid phase at the logged min/max temperatures. As stated by Liu et al., [30] this is due to the fact that the friction zone is not located at the substrate surface, where the temperature was measured, but above the deposited coating. The quasi-liquid layer is formed between the friction zone and the deposited material, and consists of liquid and solid material alike. Thus, lowering the traverse speed may increase the temperature at this layer, increasing the thickness of the quasi-liquid layer.

D. Thermal Shock Analysis

Using Si3N4 with a thermal shock resistance parameter of first type R of 768 K, no cracks have been found in the substrate, and the pull-off dollies showed no evidence of surface breakouts remaining on the detached areas. To get a more accurate overview of the thermal shock behavior, further thermal shock tests with ΔT of 600 K, 770 K and 1000 K were conducted. Due to the rapid change in temperature during the coating process, differential expansion and contraction can induce stresses into the substrate, leading to a weakening of the grain boundaries and crack growth, negatively affecting the strength. [40] Figure 14 shows the tested specimens. To increase the visibility of the cracks, a dye penetrant test, as described in Section II-G, was carried out. Shifting focus to the findings in Section III-C, temperatures up to 585.60°C ± 3.60 K were measured during the coating process. These temperatures seem to have no detrimental effect on the material. This behavior is confirmed by the thermal shock tests carried out at ΔT = 600 K (Figure 14(a)), which show no evidence of crack formation. Increasing the temperature up to the thermal shock parameter of first type R = 768 K (Figure 14(b) with ΔT = 770 K), cracks started to form. Further tests revealed that an extreme temperature change with ΔT = 1000 K, exceeding the thermal shock parameter of first type by over 230 K, will drastically increase the number of initiated cracks, see Figure 14(c). Although the material used has a thermal shock resistance of 768 K, cracks were observed in the tested specimens for this temperature range. This is related to the defect distribution within a volume and the failure probability of ceramics, which will be discussed in more detail. According to Carter et al., [2] the mechanical properties of ceramics are dependent on the defect size and distribution and can vary within a material: this is described by the Weibull distribution.
[41] By transferring the Weibull distribution to material strength, the failure probability (P f ) can be calculated as follows [42] : where r is the measured flexural strength, m the Weibull modulus and r 0 the characteristic strength for a failure probability of 63.2 pct. These values are determined by the manufacturer during production and are most often listed in the datasheet accompanying the material. Using these values a failure probability can be calculated for a desired flexural strength or vice versa. Due to the fact that the defect distribution and thus the mechanical strength in ceramics is dependent on the volume of the tested specimens, part geometry is regulated by standards such as ASTM C 1161-18, [43] ASTM C 1499-19, [44] and DIN EN 843-1. [45] Upon consultation with the manufacturer (3M Deutschland GmbH, Germany), the Weibull modulus (m ¼ 13:3) and the characteristic strength for the used Si 3 N 4 ceramic material (r 0 ¼ 827MPa) was provided. These values were determined by the manufacturer using rectangular beam specimens and a four-point bending apparatus. Due to the different specimen geometry used for the thermal shock tests, these values could not be used directly. According to Lube et al. and Danzer et al. [46,47] uniaxial and biaxial strength tests for Si 3 N 4 specimens follow the same volume dependency. The characteristic strength of the beams can be used to predict the characteristic strength of the cylindrical specimens. Two effects need to be considered for a comparison. Because of the different stress states for an uniaxial and biaxial load, in the first step, an equivalent stress (r 0;eq ) needs to be calculated for the reference characteristic strength (r 0;beam ). This is done using the principle of independent action [48] and is given by the following equation: where r I , r II and r III are the principal stress components. For an uniaxial stress state only one principle stress component is involved thus: For cylindrical ball on three balls test specimens, Danzer et al. [47] observed that a biaxial stress state is generated and used Eq. [10] to determine the equivalent characteristic strength. They demonstrated that, accounting for volumetric effects (see below), the characteristic strengths from ball on three balls tests were directly comparable to those determined from bending tests. According to Staudacher et al. [49] results obtained from different biaxial strength measurements (i.e., ring on ring test and ball on three balls test) deliver comparable results. For the thermal shock test considered in this work, sudden chilling of the cylindrical test piece will generate a similar biaxial stress state on the surface. Thus, in the second step, the influence of the volumetric effect needs to be considered. To transfer the strength values to another part geometry (i.e., volume), the following equation was used [50] : where V 0 eff is the effective volume of the tested specimen with the corresponding characteristic strength r 0 , V eff the effective volume of the part to which the strength value is transferred to and r the transferred characteristic strength. The strength in both cases is related to the effective volume and thus can be used to extrapolate the characteristic strength for a different volume. The effective volumes can be calculated as follows [51] : For the four-point bending test: where L i is the inner load span, L o is the outer load span, b is the section breadth of the beam and d the section depth. 
For the cylindrical specimens used in this work: where D L is the diameter of the loading ring, D S the diameter of the supporting ring, D the disc diameter and h the disc height. Using Eqs. [12], [13], [14] and part geometry data (i.e., L i ¼ 20 mm, L o ¼ 40 mm, b ¼ 4 mm, d ¼ 3 mm, D L ¼ 5 mm, D S ¼ 11 mm, D ¼ 25 mm, h ¼ 10 mm) the characteristic strength for the tested specimens was calculated. This value can be used to estimate the failure probability of the thermal shock specimens using Eqs. [1] and [9[. The results are listed in Table V. The calculated values reflect the results from the thermal shock tests. For a thermal shock of DT ¼ 600 K a failure probability of 4.33 pct was estimated. As already mentioned examining the specimens for this temperature no cracks were found. Whereas the specimens with an induced thermal shock of DT ¼ 770 K showed first signs of crack formation. None of these specimens remained undamaged. This is attributed to the high failure probability of P f % 70:57 pct at this temperature and explains why a thermal shock of 770 K causes crack initiation. For the specimens quenched at DT ¼ 1000 K a drastic increase in crack formation was observed. This is also in line with the estimated failure probability of P f % 100:00 pct. The material is not capable of withstanding these stresses. Examination of the specimens revealed that the form of the cracks exhibit signs of radial propagation, starting from the specimen surface and stretching towards the center of the samples with additional fractures running through the material. This is due to the rapid cooling that first takes place on the specimen surface of the material making contact with the quenching medium. While the core of the material still retains a high temperature, the outer surface cools down abruptly. This induces tensile stress at the specimen surface and compression stress at the center where a combination of the local stress state and material structures provide a crack initiation site. Because of the mixed fracture type, volume defects as well as surface defects are most likely the cause for failure of the tested specimens at 770 K. Whereas additional crack initiation through grain boundary weakening is most likely the reason for failure at 1000 K. It should be noted that the thermal shock temperature indicated in the datasheet can be misleading. This is most often calculated for the mean tensile strength and is characterized by a very high failure probability which should be taken into account for high temperature applications of the selected material. Figure 15 shows a SEM image of the interface for the aluminum alloy coated Si 3 N 4 substrate using 6000 rpm and 200 mm/min as coating parameters. Using secondary electron contrast on the SEM the surface structure at the interface can be seen. Preparation of the specimen proved to be difficult because the ductile aluminum showed tendency to smear onto the hard surface of the silicon nitride substrate during the polishing process. This also led to a step formation at the interface which can be identified by the brighter contrast in the image. Changing the detector type on the SEM from secondary electron to backscatter, showing material contrast, reveals a clear separation of the interface without evidence of a diffusion zone (Figure 16(a)). Figure 16(b) shows an energy dispersive X-ray spectroscopy (EDX) analysis of the substrate-coating interface. 
As can be seen at the interface nitrogen is missing at Point 3, with the aluminum content dropping to 16 pct and silicon content increasing to 84 pct. This could be an indication for a chemical reaction and forming of new compounds; however, this could not be verified by further SEM examinations as nitrogen, which is an element of low atomic mass, is extremely difficult to detect by SEM and EDX-Analysis. [52] Measuring the elements at different positions revealed that only 2 out of 10 measurements reproduced this result. E. Microscopy and EDX Analysis To overcome the difficulties described above (e.g., material preparation, interface image resolution, nitrogen detection) further analysis of the interface and bonding mechanisms has been conducted by scanning transmission electron microscopy (STEM) and EDX. Looking at Figure 17 it can be seen that the aluminum coating forms a bond with the Si 3 N 4 substrate at several positions (1)(2)(3)(4). These spots have been analyzed in more detail. Comparing the bright field (BF) images of the individual spots in Figures 18(a) through (d) one major peculiarity is apparent in all areas; a glassy phase connecting the interface. Whereas spots 1 to 3 are still connected, spot 4 shows a hook-like structure which has been separated. This could be due to the different thermal expansion coefficients of the constituent materials and the resulting stresses induced during the cooling phase after the coating process. EDX analysis of the spot 1 and spot 4 glassy phase reveals a high silicon and oxygen content with a smaller percentage of aluminum and nitrogen as shown in Figures 19(a) and (b). respectively. It should be noted that C, Pt and Ga signals are due to sample preparation and the Cu signal originates from the Cu TEM-Grid used. To identify the composition of the interface region further analysis is needed; therefore HRTEM imaging has been used. No distinct lattice structure could be perceived. The composition is predominately amorphous and the EDX is showing remains of both materials (i.e., coating and substrate). This leads to the assumption that amorphous SiAlON forms during the process at the interface, bonding the coating to the substrate. To ensure that the compounds were not already present on the surface before the coating process, additional samples (i.e., before and after coating process) were analyzed. Figure 20(a) shows a SEM image of the Si 3 N 4 substrate before the coating process and Figure 20(d) after the coating process with the coating removed. Quantitative results of the EDX analysis can be seen in Figures 20(d) and (e) , whereas the EDX spectra are shown in Figures 20(c) and (f) respectively. It should be noted that C and Au signals are due to sample preparation. For easy comparison bar plots have been used for the quantitative mass percent results. Comparing the green bars which represent the oxygen mass percentage and the blue bars which represent nitrogen mass percentage it can be seen that the surface oxides and nitrides increase significantly after the coating process. This can also be seen in the EDX spectra (Figures 20(c) and (f)). The element peak intensity of oxygen and nitrogen increases after the coating process. According to Potts et al., [53] these peaks are approximately proportional to the mass percent of the elements measured. 
This confirms the assumption that the compounds were not already present on the substrate surface before the coating process and that amorphous SiAlON forms during the coating process at the interface. Further analysis of the specimens was conducted by removing the coating from the substrate and analyzing the substrate surface. Figures 21(a) through (e) show these specimens and their corresponding coating parameters (i.e., 4000-6000 rpm and 200-400 mm/min). Upon examination of the specimens that had their coatings pulled-off, dark areas were identified on the substrate surfaces; of which an increase in size and quantity was observed with increasing rotational speed. Whereas an increase in traverse speed showed the opposite effect, namely a reduction in size and quantity. EDX analysis of the dark areas, in comparison to the bulk material, showed no change in composition. Despite decreasing the acceleration voltage of the SEM, thus reducing the interaction volume, a change in element mass percentage was not detectable. This was attributed to the thickness of the remaining dark layer. In contrast, a different result emerged from the analysis of the pull-off dollys. Figure 22(a) shows the SEM image of a pull-off dolly with the spot locations for the EDX analysis. Checking the quantitative results of the EDX analysis in Figure 22(b) at spot 4 and spot 3 the aluminum alloy used for manufacturing the dollys can be identified. Whereas spot 2 and spot 1 show a high silicon, aluminum, oxygen and nitrogen content which can also be seen in the EDX spectrum graph for spot 1 (see Figure 22(c)). Looking at spot 4 and spot 1 with a higher magnification a change in the surface structure is revealed (Figures 23(a) and (b)). This is in line with the TEM and EDX results (see Figures 19(a) and (b) ), again confirming that SiAlON forms during the coating process at the interface. Examining the surface structure and corresponding dollys of the coated specimens no interface reaction was observed for low rotational speeds and high traverse speeds. Whereas by increasing the rotational speed and lowering the traverse speed an increase in bonded area was observed (see Figures 21(a) through (e)). As already mentioned in Section III-B this is most likely caused by the heat generated during the process. By increasing the rotational speed and decreasing the traverse speed, more heat is generated in the friction zone: this surplus heat is transferred to the interface supporting chemical reaction and thus increasing the bonded area. According to Brener et al. [15] who studied interfacial reactions in Al/Si 3 N 4 thin-film systems, Al can reduce Si 3 N 4 forming AlN and Si even at low temperatures and low annealing periods (i.e., 550°C for 100 minutes). Also Si 3 N 4 grows native oxides which are terminated by Si-OH groups on the surface when exposed to water. [54] This hydrated substrate surface can serve as an oxide source for further reactions and formation of SiO 2 at the interface. Additionally, SiO 2 is used as an sintering additive and also segregates at the Si 3 N 4 grain boundaries during the sintering process. [2] If these are exposed to the coating material additional reactions (e.g., SiAlON) can occur and form bonds at the interface. Despite the fact that in this study no heat treatment has been carried out and the samples were only exposed to higher temperatures for a short period of time (i.e., 10 seconds), forming of new compounds at the interface has been observed. 
This could be due to the applied axial force and partly oxide-free coating material (Al) coming in contact with the Si 3 N 4 surface inducing a reduction reaction forming SiAlON. Figure 24 shows a schematic diagram of the friction surfacing process, to illustrate the formation of the interfacial phases and oxides. The core material in the coating rod is not exposed to oxygen; material flow and axial force during the process brings this material in contact with the substrate surface leading to reduction of the ceramic substrate and forming of compounds even at low temperatures. Additional experiments have been conducted by heating aluminum specimens upto 650°C and pushing Si 3 N 4 substrates onto the surface, trying to join the parts. Due to the surface oxides of the aluminum part no reaction or bonding could be achieved. This also underlines the assumption that oxide free coating material coming in contact with the ceramic substrate is key to achieving a bond between these dissimilar materials. For further examination of the interface composition X-ray diffraction analysis (XRD) was considered but deemed to be unsuitable because the sub-micrometer reaction zone can not be isolated properly to irradiate sufficient volume for the analysis. IV. CONCLUSION In this study the potential of a new low-cost, reliable and robust coating technique for ceramics based on the technique of friction surfacing has been identified. Bonding strengths and related mechanisms have been analyzed and relationships with the coating parameters have been established. Interface reactions and bonding mechanisms have been identified. Results can be summarized as follows: Experiments conducted on the material combination Si 3 N 4 and AlMgSi0.5 as a coating reveal that an appropriate thermal shock resistance is crucial for this type of coating process. Using Si 3 N 4 no cracks have been found in the substrate. Additional thermal shock tests revealed crack formation at a quenching temperature of 770 K and 1000 K which was attributed to volume and surface defects as well as grain boundary weakening due to the induced thermal stress. R-squared values of 84.65 pct for the coating thickness and 80.20 pct for the bonding strength have been achieved. This indicates that the coating process is stable across the range of input parameters used. Depending on the parameters used, coating thicknesses up to 2.03 mm and bonding strengths up to 42.5 MPa were achieved. STEM, EDX and HRTEM analysis revealed formation of a glassy phase at the interface consisting of predominantly amorphous SiAlON. No interface reaction was identified for low rotational speeds and high traverse speeds. Whereas by increasing the rotational speed and lowering the traverse speed an increase in interface reaction and bonded area was observed. Partially oxide-free coating material (Al) is exposed to the substrate surface during the coating process leading to reduction of the ceramic substrate and forming of compounds even at low temperatures. In summary, friction surfacing of ceramics is a one step process, provides similar strength to thin-film metallizations (e.g., PVD) [16] and does not require any atmospheric control or a furnace. It is also suited for low quantity production and deposition rates exceed those of physical vapor deposition by a 1000x magnitude. The applied coating can be used as is (e.g., heatsink) or as a bonding agent for further process steps such as recasting, welding or brazing.
Estimation of Stiffness of Non-Cohesive Soil in Natural State and Improved by Fiber and/or Cement Addition under Different Load Conditions

The aim of this study was to compare the stiffness of gravelly sand under various load conditions: static conditions using the CBR test and cyclic conditions using the resilient modulus test. The tests were conducted on natural soil and soil improved by the addition of polypropylene fibers and/or 1.5% cement. The impacts of the compaction and curing time of the stabilized samples were also determined. The soil was sheared during the Mr tests, even after fiber reinforcement, so the resilient modulus value for the unbound sand could not be obtained. The cement addition improved Mr, and the curing time also had an impact on this parameter. The fiber addition increased the value of the resilient modulus. The CBR value of the compacted gravelly sand was relatively high. It increased after adding 0.1% fibers in the case of the standard compacted samples. A greater fiber addition lowered the CBR value. For the modified compacted samples, each addition of fibers reduced the CBR value. The addition of cement led to a CBR increase, which was also affected by the compaction method and the curing time. The addition of fibers to the stabilized sample improved the CBR value. The relationship Mr = f(CBR) obtained for all data sets was statistically significant but characterized by a large error of estimate.

Introduction

The soil and granular material embedded in the base and subbase of pavements are subjected to a large number of loads at stress levels considerably below their shear strength. Under a single load of a moving wheel, pavements mainly respond in an elastic way. Nevertheless, irreversible plastic and viscous strains may accumulate with repeated loading [1], and the thickness of the pavement layers is of great importance for maintaining flexible pavements. The impact of repetitive loadings on pavements was explained in 1955 [2], and a new term, "resilience", was introduced. The resilient response of granular material, expressed as a modulus of resilience (later known as the resilient modulus), was first defined by Seed et al. [3,4] as the ratio of the repeated deviator stress in the triaxial compression test to the recoverable (resilient) axial strain produced once the cyclic load is applied, i.e., Mr = σd/εr. It should be noted that the tests should be performed in devices with the possibility of cyclic loading, because only then is it possible to assess the effect of stress and strain accumulation. Cyclic triaxial test procedures have been standardized; the most influential ones are the AASHTO T307 [5] and EN 13286-7 [6] standards. It should be emphasized that the European Standard [6] concerns only unbound mixtures; bound mixtures are still estimated by means of the static elastic modulus. To date, many researchers have examined the impacts of numerous factors on the resilient modulus of several types of soil. The effects of the confining pressure and deviator stress, load frequency and duration, number of load cycles, density, graining, and soil saturation were characterized in detail by Brown [1] and Lekarp et al. [7]. The main factors influencing the resilient modulus of unbound non-cohesive [4,8–12] and cohesive soils [13–16] are the applied axial stress and confining pressure, but their impacts are different. In the case of non-cohesive soils, the resilient modulus increased slightly with confining pressure and significantly with repeated axial stress.
The resilient modulus depended on the number of loading cycles and their frequency [17]. Tanimoto and Nishi [13] stated that, after a large number of repetitions, the resilient strains of silty clay reached a constant value after modifying the soil structure. Tang et al. [12] assessed the required number of repetitions to be about one hundred cycles. The variability of the accumulated plastic strain was bigger with a greater number of load cycles, a higher amplitude of dynamic stress, and a lower confining pressure. The resilient modulus of cohesive soil with low plasticity [18,19] increased with an increase in matric suction and relative compaction. Nowadays, the mechanistic design methods for pavements and pavement layers require the resilient modulus of unbound pavement layers in order to establish layer thickness and the whole system response to traffic loads. However, the resilient modulus testing procedure is considered to be complicated, and regional road laboratories do not have cyclic triaxial apparatuses at their disposal. Therefore, the relationship between the resilient modulus value and other parameters, often even available in databases, is sought. Statistical regression models can be divided based on [20]: (I) a single strength or stiffness parameter; (II) physical soil parameters and stress state; (III) a stress invariant or a set of stress invariants; and (IV) constitutive equations for the estimation of resilient modulus values based on soil's physical properties incorporated into the model parameters in addition to stress invariants. One such single parameter of the first group is the California Bearing Ratio (CBR). The CBR test has been commonly applied in granular material and soil testing in road laboratories for about eighty years. The CBR parameter is defined as the ratio (in percent) of the unit load required to press a standardized piston into a soil sample to a certain depth at a rate of 1.25 mm/min to the standard load, i.e., the unit load needed to press the piston at the same rate to the same depth into a crushed rock at standard compaction. In many countries, the method based on CBR remains the primary method of pavement design or even the recommended method for characterizing subgrades [1]. It needs to be mentioned that the CBR value does not reflect the shear stress caused by repeated traffic loading. The shear stress depends on several factors, none of which is fully controlled or modeled in the CBR test. However, CBR values are strongly related to compaction characteristics, so the CBR test can be applied as a method of assessing earthworks [21,22]. The CBR value is used to evaluate the subgrade or subbase penetration resistance, and it can be employed to assess the resistance to static failure. The relationship Mr = f(CBR) most frequently quoted in the literature is the formula given by Heukelom and Klomp [23]:

Mr = 10.34 · CBR [MPa]  (2)

Formula (2) has been converted into SI units. It should be mentioned here that the relationship was not originally related to the resilient modulus obtained in laboratory tests but to the dynamic modulus, which was determined based on vibration wave propagation. Nevertheless, this relationship is commonly used in the form of the relationship Mr = f(CBR). Although the regression coefficient provides the best fit with CBR values from 2 to 200, many researchers have limited the CBR values to less than 10 or 20.
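As a point of reference, the Heukelom and Klomp type conversion can be wrapped in a few lines of Python. The sketch below assumes the commonly cited SI coefficient of about 10.34 MPa per CBR unit (equivalent to 1500 · CBR in psi) and simply tabulates the estimate for a range of CBR values.

```python
# Minimal sketch of the Heukelom-Klomp type estimate M_r = f(CBR),
# assuming the commonly cited SI coefficient of about 10.34 MPa per CBR unit
# (equivalent to E = 1500 * CBR in psi).

HK_COEFFICIENT_MPA = 10.34  # assumed literature value, not a measured result

def resilient_modulus_from_cbr(cbr_percent: float) -> float:
    """Rough resilient-modulus estimate in MPa from a CBR value (percent)."""
    if cbr_percent <= 0:
        raise ValueError("CBR must be positive")
    return HK_COEFFICIENT_MPA * cbr_percent

if __name__ == "__main__":
    for cbr in (5, 10, 20, 50, 100):
        print(f"CBR = {cbr:3d} %  ->  M_r ~ {resilient_modulus_from_cbr(cbr):7.1f} MPa")
```

Any such single-parameter estimate should be read together with the caveats discussed below, since the coefficient tends to misestimate stiffness at the extremes of the CBR range.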
Many researchers have found that the relationship of M r = f (CBR) underestimates the M r value for lower CBR values and overestimates it for higher CBR values. A detailed discussion of the evaluation of the dependence of M r on CBR was carried out by Dione et al. [24]. Farell and Orr [25] confirm that Equation (2) overestimates the stiffness of fine-grained soil, especially at high CBR values. They believe that, at low CBR values, CBR corresponds to the stiffness of the material while at high values of its strength. An underestimation of the M r value of soaked gypsum sand [26] and an overestimation in the case of a soil-fly ash mixture have been found using other relationships M r = f (CBR) proposed in the literature [27]. The authors of this study concluded from their previous research that it may not be possible to provide the relationship of M r = f (CBR) for non-cohesive soil because the cyclic triaxial test can destroy unbound non-cohesive soil [28,29]. Thus, the aim of this study was to compare the stiffness tested under different load conditions in a typical non-cohesive soil used for road base and subbase construction-postglacial gravelly sand. Static and cyclic loads, represented by CBR and resilient modulus tests, were considered. The soil was tested as unbound and hydraulically bound by the addition of 1.5% cement, choosing the minimum amount that could improve the resilient characteristics of the tested soil. As the authors' previous experiences have shown that non-cohesive soil can be sheared during the cyclic triaxial test, it was also decided to test soil samples reinforced with 18 mm-long polypropylene fibers to improve the tested material [30]. Thus, the behaviors of four different materials were considered-gravelly sand, fiber-reinforced gravelly sand, cement-stabilized gravelly sand, and cement-stabilized and fiber-reinforced gravelly sand. The impacts of compaction methods, standard and modified Proctor tests, and the curing time of the stabilized samples were also established. Materials Studies were conducted on two research samples of non-cohesive soil and cementstabilized soils, as well as their mixtures with different quantities of polypropylene fibers. Figure 1 illustrates the grain-size distribution curves of the different samples of the tested soils based on sieve analyses according to the EN 933-1 standard [31]. The estimated material is a coarse soil, with sand as its primary fraction and gravel as its secondary fraction. The tested soil is gravelly sand (grSa) in accordance with the EN ISO 14688-1 standard [32]. The values of the coefficient of uniformity (C U ) and the coefficient of curvature (C C ), calculated based on the grain-size distribution curves of the research samples I and II, were 5.45 and 0.87, and 5.04 and 0.66, respectively. The tested soil can be assessed as poorly graded based on the EN ISO 14688-2 standard [33]. The tested soil meets the standard requirements [34,35] for subbase or base materials with a lower percentage of fine fractions, which is suitable in frost areas. Gravelly sand is a Pleistocene glaciofluvial soil characterized by a variability in the relief surface related to the high dynamics of the sedimentary environment and the variety of mineral compositions of post-glacial soils. Gravelly sand consists of well-rounded quartz crumbles, as well as angular grains, with a considerable amount of lytic particles and feldspars [28,36]. 
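The uniformity and curvature coefficients quoted above follow from the standard definitions CU = d60/d10 and CC = d30²/(d10·d60), based on the characteristic particle diameters read off the grading curve. The short sketch below applies these definitions; the diameters used are hypothetical placeholders, not the measured values for samples I and II.

```python
# Grading coefficients from characteristic particle diameters (in mm):
#   C_U = d60 / d10             (coefficient of uniformity)
#   C_C = d30**2 / (d10 * d60)  (coefficient of curvature)
# The diameters below are hypothetical placeholders, not the tested sand.

def grading_coefficients(d10: float, d30: float, d60: float) -> tuple:
    """Return (C_U, C_C) for the characteristic diameters d10, d30, d60."""
    c_u = d60 / d10
    c_c = d30 ** 2 / (d10 * d60)
    return c_u, c_c

if __name__ == "__main__":
    d10, d30, d60 = 0.20, 0.42, 1.05  # placeholder values in mm
    c_u, c_c = grading_coefficients(d10, d30, d60)
    print(f"C_U = {c_u:.2f}, C_C = {c_c:.2f}")
```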
The compaction parameters, optimum water content (wopt), and maximum dry density (ρd max) were established in accordance with the standard Proctor (SP) and modified Proctor (MP) methods, following the EN 13286-2 standard [37]. The compaction curves of the two different research samples of gravelly sand, along with saturation lines, are shown in Figure 2. In the cases of both research samples, for the soil compacted by means of a higher compaction energy, the degree of saturation (Sr) is slightly higher than that for the soil compacted by means of a lower compaction energy. Sample I, with slightly better graining parameters (CU and CC), is characterized by a greater density obtained at a lower water content in both compaction methods, standard and modified, than that of sample II. The hydraulically bound mixture of soil and cement was created with the addition of 42.5R Portland cement in the amount of 1.5% of the dry mass of the cement to the dry mass of the soil in an examined sample. An attempt was also made to test the soil samples and the soil-cement mix with dispersed reinforcement in the form of polypropylene fibers. Thus, 18 mm-long fibers were used, which were added at amounts of 0.1%, 0.2%, and 0.3% in relation to the dry mass of the compacted soil. The polypropylene fibers are shown in Figure 3. The soil and polypropylene fiber mixtures were mixed by means of a laboratory mechanical stirrer, which is of great importance for the homogeneity of the tested samples.
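The void ratios listed in Table 1 and the degree of saturation discussed for the compaction curves follow from the standard phase relationships e = ρs/ρd − 1 and Sr = w·ρs/(e·ρw). A minimal sketch is shown below; the dry densities, water contents and the specific density of 2.65 g/cm³ are illustrative placeholders rather than the measured values.

```python
# Phase relationships behind the compaction-point evaluation:
#   e   = rho_s / rho_d - 1          (void ratio)
#   S_r = w * rho_s / (e * rho_w)    (degree of saturation, w as a decimal)
# All numerical inputs below are illustrative placeholders, not measured data.

RHO_W = 1.00  # density of water in g/cm^3

def void_ratio(rho_s: float, rho_d: float) -> float:
    """Void ratio from specific (solid) density and dry density in g/cm^3."""
    return rho_s / rho_d - 1.0

def degree_of_saturation(w: float, rho_s: float, e: float) -> float:
    """Degree of saturation (0-1) for a gravimetric water content w (decimal)."""
    return w * rho_s / (e * RHO_W)

if __name__ == "__main__":
    rho_s = 2.65  # placeholder specific density of a quartz-rich sand
    for label, rho_d, w in (("standard Proctor", 1.85, 0.100),
                            ("modified Proctor", 2.00, 0.085)):
        e = void_ratio(rho_s, rho_d)
        print(f"{label}: e = {e:.3f}, S_r = {degree_of_saturation(w, rho_s, e):.2f}")
```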
The compaction parameters (ρ d max and w opt ), void ratio at maximum compaction (e), and specific dry density (ρ s ) values of all the tested materials are presented in Table 1 (in the sample labels, C denotes cement addition and F fiber addition). For both compaction methods, it can be noted that the value of the void ratio (e) decreases with an increase in the cement addition to the mixture. The specific dry density slightly increases with the addition of cement to the mixture. However, the addition of the polypropylene fibers generally does not influence the specific dry density because of their low mass. The maximum dry density increases, while the void ratio decreases, after the addition of the polypropylene fibers to the gravelly sand or to the sand and cement mixture. Generally, the optimum water content decreases thereafter.

California Bearing Ratio Test

The California Bearing Ratio (CBR) is defined as follows:

CBR = (p / p s ) × 100%

where p is the unit load used to press a standardized piston into a soil to a specific depth at a rate of 1.25 mm/min, and p s is the unit load needed to press the piston at the same rate to the same depth into a crushed rock at standard compaction.

The CBR laboratory tests were conducted on the samples of the gravelly sand and its mixtures with 1.5% cement and/or various percentages of polypropylene fibers in the amounts of 0.1%, 0.2%, and 0.3%. The percentage represents the dry mass of the additive per the dry mass of the soil in a specimen. For the hydraulically bound material, dry soil was mixed with the fibers and cement by means of a laboratory mechanical stirrer; then, water was added to reach a moisture content corresponding to w opt (see Table 1). The samples were compacted at optimum water contents using the standard Proctor (SP) and the modified Proctor (MP) methods in CBR molds. The CBR tests were performed on the samples directly after compaction (hydraulically unstabilized) and on the samples compacted and cured for 7 and 28 days at constant humidity and a temperature of 20 °C to avoid drying. The tests presented in this paper were performed only on unsoaked samples to enable comparisons of the test conditions during the CBR and resilient modulus tests.
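For illustration, the sketch below evaluates the CBR definition for hypothetical piston pressures; the readings are invented, and the reference pressures for the standard crushed rock (6.9 MPa at 2.5 mm and 10.3 MPa at 5.0 mm penetration) are the values commonly quoted alongside ASTM D1883, so they should be checked against the standard actually applied.

```python
# Hypothetical piston pressures read from the load-penetration curve (assumed).
p_2_5 = 9.2    # MPa at 2.5 mm penetration
p_5_0 = 13.1   # MPa at 5.0 mm penetration

# Commonly quoted reference pressures for the standard crushed rock (assumed here).
P_STD_2_5 = 6.9    # MPa at 2.5 mm
P_STD_5_0 = 10.3   # MPa at 5.0 mm

cbr_2_5 = 100.0 * p_2_5 / P_STD_2_5
cbr_5_0 = 100.0 * p_5_0 / P_STD_5_0
cbr = max(cbr_2_5, cbr_5_0)   # the larger of the two values is reported, as in the text
print(f"CBR(2.5 mm) = {cbr_2_5:.1f}%, CBR(5.0 mm) = {cbr_5_0:.1f}%, CBR = {cbr:.1f}%")
```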
The samples were loaded following the ASTM D1883 standard [38], with a recommended load of 2.44 kPa (4.54 kg) in the static penetration tests. Figure 4a shows a loaded sample in a CBR mold prior to testing. The larger CBR value, calculated from the piston resistance at a penetration depth of 2.5 or 5.0 mm, was accepted as the result. The results obtained were collected using a computer program.

In accordance with the requirements of the EN 13286-47 standard [39], immediate bearing index tests were also carried out, i.e., CBR tests without a load on the sample. In this case, the tests of the hydraulically bound mixtures were carried out no later than 90 min after mixing.

Resilient Modulus in Cyclic Triaxial Apparatus

The resilient modulus (M r ) is expressed by the following formula:

M r = σ cyclic / ε r

where σ cyclic is the amplitude of the applied cyclic axial stress, and ε r is the relative resilient (recovered) axial strain.

Laboratory tests were executed in the cyclic triaxial test apparatus on the gravelly sand and its mixtures with 1.5% cement and/or various percentages of polypropylene fibers in the amounts of 0.1%, 0.2%, and 0.3%. The percentage represents the dry mass of the additive per the dry mass of the soil in an examined sample. At first, the dry components were mixed using the laboratory mechanical stirrer; then, water was added to obtain a moisture content corresponding to w opt (see Table 1).
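A minimal numerical illustration of the M r definition, with an assumed cyclic stress amplitude and recovered deformation for a 140 mm high specimen (the values are not measurements from this study):

```python
# One load cycle from a hypothetical triaxial sequence (all values assumed).
sigma_cyclic = 62.0e3    # amplitude of the cyclic axial (deviator) stress, Pa
delta_h_rec = 0.035e-3   # recovered (resilient) axial deformation over the cycle, m
h0 = 0.140               # specimen height, m

eps_r = delta_h_rec / h0      # resilient axial strain
m_r = sigma_cyclic / eps_r    # resilient modulus, Pa
print(f"epsilon_r = {eps_r:.2e}, M_r = {m_r / 1e6:.0f} MPa")
```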
The cylindrical specimens, 70 mm in diameter and 140 mm high, were compacted by impact in three layers in a bipartite mold to obtain the maximum dry density values found using the standard Proctor (SP) and the modified Proctor (MP) tests (Table 1), and they were then relocated to the triaxial chamber. The M r tests were conducted on the specimens directly after compaction (hydraulically unstabilized) and after 7 and 28 days of curing at a constant temperature and humidity. An image of a sample prepared for testing is shown in Figure 5.

In the cyclic triaxial apparatus, the confining pressure and axial load were applied pneumatically. The machine used repeated cycles of a haversine-shaped load pulse, where the load pulse lasted 0.1 s and the rest period lasted 0.9 s. The variations in the sample height in the course of the tests were measured using external LVDT displacement transducers. The test settings and the results found were controlled and saved by a computer program. An image of a sample in the triaxial chamber prior to the cyclic test is shown in Figure 4b.

The specimens were exposed to cyclic loading in order to establish the resilient modulus M r according to the AASHTO T307 standard [7]. Table 2 describes the sequence 0-15 data for the subgrade material. Sequence "0" is the conditioning of the specimen. The number of cycles for this sequence was 500-1000; for all following sequences, it was constant and equal to 100. In the subsequent fifteen sequences, the confining pressure ranged from 20.7 to 137.9 kPa, and the maximum axial stress ranged from 20.7 to 275.8 kPa. The M r value was calculated for sequences 1 to 15 as the average value over the final five cycles of each load sequence.
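The loading scheme described above can be sketched as follows; the haversine pulse shape, the per-cycle moduli and the sequence length are assumed for illustration, and only the averaging of the final five cycles reflects the processing described in the text.

```python
import numpy as np

def haversine_pulse(t, load_duration=0.1, period=1.0, amplitude=1.0):
    """Haversine-shaped load pulse: a 0.1 s loading phase followed by a 0.9 s rest."""
    t_in = t % period
    pulse = amplitude * np.sin(np.pi * t_in / load_duration) ** 2
    return np.where(t_in <= load_duration, pulse, 0.0)

# Two seconds of the assumed load signal at an (assumed) 62 kPa cyclic stress amplitude.
t = np.linspace(0.0, 2.0, 2001)
load_kpa = haversine_pulse(t, amplitude=62.0)

# Per-cycle resilient moduli from one hypothetical 100-cycle sequence (MPa, assumed).
rng = np.random.default_rng(0)
m_r_cycles = 250.0 + rng.normal(0.0, 5.0, size=100)

# As described in the text, the sequence M_r is the mean over the final five cycles.
m_r_sequence = m_r_cycles[-5:].mean()
print(f"peak load = {load_kpa.max():.1f} kPa, sequence M_r = {m_r_sequence:.1f} MPa")
```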
Results and Discussion

Table 3 presents the California Bearing Ratio test results obtained for the gravelly sand and the sand with the 1.5% cement addition. Two different research samples taken from the same deposit were tested. All types of specimens were tested alone or improved with polypropylene fibers, 18 mm in length, in the amounts of 0.1%, 0.2%, and 0.3% in a mass ratio related to the dry soil mass. The specimens were compacted by means of the standard or modified Proctor method at the optimum water content. The hydraulically bound specimens were tested immediately after compaction or after 7 or 28 days of curing. The samples were tested unloaded (immediately after compaction) or loaded at 2.44 kPa.

The gravelly sand without any improvement showed relatively high CBR values, which were higher under the minimum load of 2.44 kPa than unloaded. These values are 25.9-37.0% and 56.4-90.2% for the standard and modified Proctor compaction methods, respectively. Higher CBR values were obtained for sample I, which was characterized by a slightly better particle size distribution (higher values of the grading coefficients, C U and C C ) and, hence, a higher density after compaction. The addition of 0.1% polypropylene fibers caused an almost 2-fold increase in the CBR value of the samples compacted using the standard method. Increasing the amount of fibers to 0.2% and 0.3% reduced the CBR value to that obtained without the addition of fibers. The addition of 0.1% fibers to the samples compacted using the modified method resulted in an approximately 10% decrease in the CBR value, and increasing the amount of fibers to 0.2% and 0.3% resulted in a further slight decrease. The reduction in the CBR value after the addition of the fibers occurred despite the improvement in the compaction of the reinforced sand (see Table 1).

The results of the non-cohesive soil reinforced with fibers differ from those of previous tests of the shear strength of reinforced granular soils [30], where good reinforcement results were achieved; however, Yetimoglu and Salbas [40] also found a decrease in the strength of sand with the addition of fibers, especially under higher stress. They found that the reinforcement changed the brittle medium into a more plastic one. An improvement in mechanical properties with an increase in fiber addition was observed in [41] in the case of CBR results; however, those results were found for reinforced clayey soil.

The addition of cement in the amount of 1.5% to the gravelly sand improved the CBR values of the cured samples, which reached 68.1-143.9% and 150.9-171.3% for the standard compaction and 82.7-198.7% and 183.6-281.0% for the modified compaction after 7 and 28 days of sample curing, respectively. The samples with the cement addition, tested directly after compaction, generally obtained lower CBR values than those without the cement addition. The 0.3% addition of fibers to the cement-stabilized samples improved their CBR values immediately after compaction and after curing for 7 and 28 days. The improvement depended on the duration of the sample curing and the compaction method: in the case of a sample compacted using the standard method and tested directly after compaction, the increase in CBR was about 100%; with curing time, this increase reduced to about 50%. In the case of samples compacted using the modified method, the increase reduced from about 50% to 4% after sample curing.
The increase in the CBR value after the addition of a hydraulic stabilizer is well documented in the literature, mainly for lime-stabilized cohesive soils [42]. Cement-stabilized clayey soil with polypropylene fibers was tested by Tang et al. [43]. They stated that increasing the fiber content could weaken the brittle behavior of cemented soil by increasing the peak axial stress and by decreasing the stiffness and the loss of post-peak strength. Wang et al. [44] also stated that fibers in lime-stabilized clay mainly act to improve its ductility. The unconfined compressive strength (UCS) values of fiber-reinforced, lime- and cement-stabilized clayey soil increase with curing time [43,45].

Table 4 presents the results of the cyclic triaxial tests of the resilient modulus. The gravelly sand and the sand improved with cement and/or polypropylene fibers were tested. A set of samples was prepared for the tests in accordance with the description in Table 1.

The resilient modulus values found for the unbound gravelly sand in all tests were zero for both the natural and fiber-reinforced soils. The specimen was damaged by shearing after only the first test sequence of about 600 cycles (Figure 6a). The soil fiber reinforcement enabled only one extra sequence of tests to be carried out and the extension of the test to approximately 700 load cycles (Figure 6b). Several tests did not allow for the determination of the M r value of the gravelly sand without the addition of cement, even after fiber reinforcement.

The cement additive had a significant impact on the increase in the resilient modulus value. The 1.5% cement addition significantly increased the soil resistance to cyclic loads. The resilient modulus increased from 0 to an average of 340 MPa after 28 days of curing. In the case of the gravelly sand, the soil compaction method was less important, as the results achieved for both compaction methods were similar, although they were generally higher for the specimens compacted using the modified method. For the stabilized samples after 28 days of curing, M r values equal to 245-349 MPa and 313-513 MPa were obtained for the standard and modified compaction methods, respectively. The effect of the curing time was noticeable, and a longer curing duration resulted in a generally higher M r . The exceptions were the samples compacted using the standard method with the addition of fibers, where extending the sample curing time from 7 to 28 days worsened the M r value. This phenomenon should be explored in the future by means of SEM image analyses. The samples stabilized with the 1.5% cement addition achieved resilient modulus values of 140-324 MPa after 7 days of curing and 245-513 MPa after 28 days of curing. Higher values were obtained for sample I, a soil with better grading and compaction. The addition of the fibers to the samples stabilized with 1.5% cement increased the M r value.
Figure 6c presents a gravelly sand sample improved by the addition of fibers and cement and subjected to the resilient modulus test.

Figure 7 shows the different failure modes of the samples damaged during the resilient modulus test or after the cyclic test and quick shearing. The gravelly sand sample was sheared during the M r cyclic test, and it is characterized by the barreling mode with a single inclined failure plane (Figure 7a). After the M r test and quick shearing, the failure mode of the cement-bound soil was vertical through the failure plane (Figure 7b), as in the case of a brittle material, whereas the plane of destruction in the soil sample improved by the cement and fiber addition was close to horizontal (Figure 7c). The horizontal failure plane indicates that the fiber-reinforced material entered its plastic zone. Both cement-stabilized samples were tested after seven days of curing.
The test results indicate a positive impact of the cement addition and the hardening time on the cyclic resistance of non-cohesive soil. In the case of cement-stabilized cohesive soils, an increase in the M r value has also been found after adding cement and extending the curing time [46,47], as in the case of lime-stabilized cohesive soils [48]. However, the resilient response of cohesive soils treated with lime is visible directly after sample preparation for uncured soils [49]. The fiber reinforcement of lime-treated cohesive soil has also been found to improve the M r value [50].

Figure 8 illustrates the statistical dependence of M r on CBR found for the bound and unbound gravelly sand, as a dependence of the cyclic and static material stiffness, for the different prepared specimens. For all test results, a statistically valid relationship M r = f (CBR) was found, with a coefficient of determination R 2 = 0.6988 and a standard error of estimate SEE = 73.23 MPa. The equation M r = 1.30 CBR − 31.78 explains 69.9% of the variance in M r for the whole data set. As the M r value of the unbound samples was zero regardless of the compaction method and the addition of dispersed reinforcement, which had a significant impact on the CBR value, it was decided to investigate the dependence of M r on CBR with these points excluded. Once these points were excluded, a statistically invalid relationship M r = 0.31 CBR + 180.57 was found, with a coefficient of determination R 2 = 0.1145 and a standard error of estimate SEE = 62.06 MPa. The relationship M r = f (CBR) found for the full data set, although assessed as statistically significant, is not useful from an engineering point of view due to its large standard error of estimate, because M r = 1.30 CBR − 31.78 ± 73.23 MPa.

Overall, the unbound gravelly sand, as a poorly graded material, is non-resistant to cyclic loading regardless of the compaction method. The fines content (f C ), interpreted according to ASTM D653 [51] as the content of soil with a grain size of less than 0.075 mm, was equal to zero for the tested sand. The lack of fines caused a complete lack of coherence between the grains and damage by shearing at the beginning of the cyclic test. The resistance to cyclic interactions was not significantly improved, even after the addition of the polypropylene fibers as dispersive reinforcement. For gravelly sand with a lack of fines, the formula M r = f (CBR) should not be used.
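The regression reported above can be reproduced in outline with ordinary least squares; the sketch below uses invented (CBR, M r) pairs rather than the data of Tables 3 and 4, so the fitted coefficients will not match those quoted in the text.

```python
import numpy as np

# Hypothetical (CBR %, M_r MPa) pairs; the paper's actual data sit in Tables 3 and 4.
cbr = np.array([26.0, 37.0, 68.0, 144.0, 171.0, 199.0, 281.0, 90.0, 150.0, 56.0])
m_r = np.array([0.0, 0.0, 140.0, 245.0, 324.0, 313.0, 513.0, 0.0, 260.0, 0.0])

# Ordinary least squares fit of M_r = a * CBR + b.
a, b = np.polyfit(cbr, m_r, 1)
pred = a * cbr + b
resid = m_r - pred

r2 = 1.0 - resid.var() / m_r.var()                   # coefficient of determination
see = np.sqrt((resid ** 2).sum() / (len(m_r) - 2))   # standard error of estimate
print(f"M_r = {a:.2f} CBR + {b:.2f},  R^2 = {r2:.3f},  SEE = {see:.1f} MPa")
```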
The relationship M r = f (CBR) found for the bound samples with the 1.5% cement addition is statistically invalid, i.e., the cyclic and static material stiffnesses are independent, so M r = f (CBR) should also not be used in this case. It should be noted that Reference [52] found low (but non-zero) M r values for compacted coarse granular material; however, the AASHTO T307 standard [7] procedure was not used there. That test was limited to 300 cycles, which could certainly affect the result. In the authors' research, the gravelly sand was not damaged until about 600 load cycles had been applied (Figure 6a). Medium to well-graded crushed limestone aggregates have been characterized by high values of M r , greater for material with better grading [20]. Crushed gravel has also exhibited resistance to cyclic loading, as assessed from a summary of resilient modulus values taking into account modeled values based on the regulations of the National Cooperative Highway Research Program [53].
Solving the quantitative skills gap: a flexible learning call to arms!

Recent years have witnessed the emergence of a growing literature bemoaning the level of quantitative methods provision within the U.K. Higher Education sector, noting its negative impact upon the subsequent skills of graduates and their preparedness for the workplace. The present paper documents and evaluates an attempt to counter these issues via the introduction of an increasing element of flexible learning on a business and financial forecasting module. Using a mixture of empirical methods, it is shown that flexible learning results in improvements in student performance and ability across a range of metrics. It is argued that 'broad' forms of flexible learning can be employed to overcome the concerns of an increasingly negative literature on quantitative methods provision and the subsequent skills levels of students.

Introduction

'In universities in the USA, Germany, the Netherlands, Belgium and Switzerland students typically develop much better quantitative skills than in even the best UK degree programmes because they are at the centre of the curriculum. Too often in the UK they languish in the margins.' (Count Us In: Quantitative Skills for a New Generation, British Academy, 2015, p. 11)

This observation offers a disappointing assessment of quantitative methods in the UK Higher Education sector, one that is reiterated in numerous studies which all ultimately question the preparedness of graduates for the workplace (see, inter alia, British Academy, 2012; MacInnes, Breeze, de Haro, Kandlik, & Karels, 2016; Mason, Nathan, & Rosso, 2015). Redressing this perceived deficit is a complex proposition which cannot be achieved by simply increasing quantitative methods content, given the numerous factors that impact on student performance and influence engagement with quantitative methods modules. These factors include: the low confidence and anxieties of students in connection with quantitative methods; a perceived steep learning curve; and inconsistent expectations of the level of mathematical content at undergraduate level (British Academy, 2012; Carey, Hill, Devine, & Szücs, 2016; Dawson, 2014; Economics Network, 2012; Hembree, 1990). Given these primarily psychological blocks to the subject, an unconsidered increase in quantitative provision could generate counteracting effects which worsen, rather than resolve, engagement with the subject. This paper argues that flexible learning methods are a vital inclusion to minimise the negative impact of these significant factors while ensuring appropriately high levels of quantitative methods provision.

To demonstrate the importance of flexible learning, we target a particularly quantitatively demanding undergraduate module at Swansea University. The module, Business and Financial Forecasting, is based in the School of Management and made available to all finalists. The selection of this module reflects two important issues. First, there is an increased risk of disconnect between student expectations and the module's learning outcomes. While it is available to a student population with diverse experience of mathematical analysis, the technical demands of the module are high. Overall, it requires the understanding and application of a myriad of mathematical and statistical methods.
Any impact of flexible learning on student outcomes, assuming its positive influence on student engagement, should therefore be apparent in this type of module. Second, given that the module is designed for Social Scientists, this research is particularly relevant to the national debate that implicates the Social Sciences as a major area in which quantitative methods are in deficit. This is reflected by the Economic and Social Research Council, Higher Education Funding Council for England, British Academy and Nuffield Foundation co-funded Quantitative Methods Initiative, which identifies a deficit in skills and seeks to support improved understanding of quantitative social science.

Our governing tactic is to develop methods which meet the high technical demands expected of this type of module in an accessible and engaging manner that addresses the plaguing issue of self-efficacy. It involves the integration of e-learning materials which have been incorporated in the module since its inception in 2013-14. 1,2 These materials (discussed in more detail below) have an interactive design and differ significantly from the traditional material provided in lecture and workshop environments. Importantly, they facilitate multiple objectives: revealing the intuition behind complex statistical methods; displaying the logical processes involved in alternative approaches; and permitting the generation of additional data-based examples to assist learning. Throughout the materials, 'replication' is promoted as a tool to build confidence: clear objectives are presented for students to work towards at their own pace, offering a focus which challenges students' understanding without inhibiting their activities.

The incorporation of e-learning has inspired a radical re-organisation of the module. Changes include: the utilisation of a 'flipped classroom' format where contact hours are modified in response to changes in student working practices ahead of, and following, classroom sessions; the incorporation of student-led sessions allowing a non-didactic approach to learning; and the provision of formative assessment methods to support self-efficacy.

This paper is structured to demonstrate how this occurs. The next section locates our research within the general literature and raises issues related to the precarious position of quantitative methods in U.K. higher education. The identified skills gap in quantitative methods is reviewed ahead of discussing associated research into anxiety and student perceptions in relation to mathematical and statistical methods. Section [3] provides a discussion of flexible learning and considers its relationship with the increasingly popular concept of the flipped classroom. Section [4] details the method, highlighting the structure of the flexible learning processes that are adopted. This permits a unique empirical analysis of the impact of these materials on student performance in Section [5]. Rather than simply considering aggregate performance on the module through time, this analysis investigates the performance of students on this module relative to their performance across all other modules in their final year of undergraduate study. This approach, acknowledging the numerous factors which influence student performance (Pokorny & Pokorny, 2005), addresses the complicating factor of cohort effects.
It therefore enables focus on (i) the performance of students on this specific quantitative methods module relative to the performance of these students elsewhere and (ii) how this has changed over time as increased flexible learning materials have been introduced. Additionally, to allow consideration of potential asymmetries in the distribution of module marks, non-parametric testing is provided. This empirical approach allows us to consider possible 'separation' of students across the marking distribution where, for example, a small number of students are located at one end of the range compared to a clustering at the other. As will be discussed, the empirical analysis allows consideration of the underlying motivations of students and the resulting impact of this upon performance. Section [6] concludes.

Quantitative skills in UK Higher Education

'... many degree courses have evolved in a non-quantitative direction in order to cope with students' QS [quantitative skills] deficiencies' (Mason et al., 2015, p. 63)

The above captures the growing concern presented by the quantitative skills deficit in the U.K. Higher Education sector. This message has been reiterated on numerous occasions in an expanding literature that has generated a consensus that deficiencies in the provision of quantitative methods restrict the skills delivered by tertiary education. The British Academy's Society Counts report of 2012, for example, is downbeat when reviewing the low levels of quantitative skills achieved, particularly within the Social Sciences. The damning account of MacInnes et al. (2016) comments on the limited prior quantitative knowledge of students, which is then compounded by subsequent inadequacies of encouragement and opportunity to cement quantitative skills. These concerns were similarly expressed a year previously in the British Academy's Count Us In report of 2015.

Any deficiency in quantitative training must be seen as a pre-cursor to the employability issues raised in the Department of Business, Innovation and Skills Market Assessment of Public Sector Information report of 2013. Referring repeatedly to a skills shortage in the workplace created by the quantitative methods deficit, this specifically highlights 'a lack of skills and familiarity to work effectively with data' (Department of Business, Innovation and Skills, 2013, p. 34). Clearly, the provision of quantitative methods training and the level of resulting skills in the workplace are directly related, a point captured succinctly in the following quote from the Society Counts report: 'Students often graduate with little confidence in applying what skills they do have, which then has knock-on effects for businesses as graduates can be ill-prepared for the data demands of the workplace.' (British Academy, 2012, p. 10).

An interesting specific case study into quantitative methods, however, is offered by undergraduate degrees in Economics. While concluding that the quantitative skills deficit is most apparent for the social sciences, the above studies typically note Economics as an exception. Arguably this reflects recognition that technical modules tend to be embedded within all levels of study in Economics. However, despite this positive view, there is nonetheless evidence of deterioration. For example, the Employers' Survey conducted by the Economics Network provides an examination of employers' demands for graduate skills and flags quantitative skills as an increasingly problematic issue.
Thus, while the 2014-15 Employers' Survey (Economics Network, 2015) found data-based and IT skills to be vitally important to employers, there is a noted decline in the skill levels of economics graduates relative to the previous 2012 survey (Economics Network, 2012). In addition, there is research which questions how quantitative methods are taught within Economics. For example, Dawson (2014) refers to a 'mathematics problem' (p. 22) which includes the challenge of a 'steep learning curve' (p. 6) in relation to quantitative methods. Further to this, the 2012 National Economics Students Survey refers to student perceptions of unexpectedly high levels of, and heavy emphasis placed upon, mathematics. Rethinking Economics, an international network of students, academics and professionals, goes as far as to quote Kenneth Boulding's 'Mathematics brought rigor to economics. Unfortunately, it also brought mortis'. Overall, these findings suggest that Economics is not immune to problems associated with quantitative methods training, and the difficulties encountered can be used to query the solution to any perceived deficit.

The call for action in the Count Us In report is to 'ratchet up the quantitative content of social science and humanities programmes' (British Academy, 2015, p. 15). However, experiences in Economics illustrate that an increase in quantitative content will not create automatic gains. Consequently, rather than just targeting the frequency and volume of quantitative methods provision, the focus here is broadened to consider the nature of the pedagogical methods utilised in the delivery of quantitative methods teaching.

The dilemma created by the need to intensify quantitative provision is also starkly evident in the anxiety-performance literature. Here, it can be found that an increase in poorly constructed quantitative provision could be detrimental. The research, spanning three decades from Hembree (1990) to Dowker, Sarkar, and Looi (2016), presents numerous instances of how anxiety in mathematical and statistical disciplines impairs student performance. Aspects of this discourse focus on curriculum design and reveal how anxiety can be specific to perceptions over the nature of the content: i.e. whether it is more mathematical or statistical (Paechter, Macher, Martskvishvili, Wimmer, & Papousek, 2017). Additionally, the literature highlights issues and biases which could significantly impair goals in key areas such as widening participation. For example, there are notable gender differences in anxiety-performance relationships (Dowker et al., 2016; Macher et al., 2013). Overall, while it is apparent that revisions to quantitative methods provision may be required to ensure an appropriate level and volume of coverage, this must be undertaken in an accessible manner that does not increase anxiety and enhances, rather than damages, self-efficacy.

Flexible learning

With the advent of the massification of Higher Education, the introduction of tuition fees and the increasing importance of metrics such as league tables and the National Student Survey, alternative teaching methods and their effectiveness have come under increasing scrutiny. A prominent feature of this discussion involves the role of the 'traditional' lecture in modern day delivery. 3 Key papers here include Bligh (2000), Folley (2010) and Gibbs (2013), all of which provide highly critical commentaries of the effectiveness of the lecture format.
Limitations to the lecture format are advertised further through problems such as catering for a diverse cohort of learners (Schmidt, Wagener, Smeets, Keemink, & van der Molen, 2015) and the student preference for self-paced learning (McFarlin, 2008). From this growing literature on the limitations of the traditional lecture format, the leading alternative is currently the flipped classroom. The notion of 'flipping' can be considered as involving an inverted classroom in which computer-based instruction outside the classroom allows face-to-face delivery to become more interactive (Bergmann & Sams, 2012; Bishop & Verleger, 2013). However, despite its intuitive appeal, evidence on the impact of flipping has proved to be mixed, with substantial positive effects (Chen & Lin, 2012), small positive effects (Caviglia-Harris, 2016; Olitsky & Cosgrove, 2016), no effect (Blair, Maharaj, & Primus, 2016; Brown & Liedholm, 2002; Guerrero, Beal, Lamb, Sonderegger, & Baumgartel, 2015; Olitsky & Cosgrove, 2013; Sparks, 2013; Terry & Lewer, 2003) and even negative effects (Kwak, Menezes, & Sherwood, 2015) reported in the literature.

In an attempt to understand these mixed findings, Webb, Watson, and Cook (2018) consider potential explanatory factors which may impair the impact on student performance. The two factors considered are the inherent motivations of students and the nature of the method of flipping employed. Regarding student motivation, Harackiewicz, Barron, Carter, Lehto, and Elliot (1997) and Allgood (2001) are used to recognise that the impact of flipping may be clouded by 'grade targeting' behaviour. Here, students adopt a minimum mark that is deemed to be satisfactory. Rather than generating an improvement in student performance, the pedagogical approach can just make it easier for the student to achieve this 'reservation mark'.

Turning to the design of 'flipped' materials, Webb et al. (2018) decompose formats into 'constrained' and 'broad' forms. 'Constrained' flipping relates to the commonly adopted approach of providing standard lecture material online, with lectures recorded via video capture software being a popular choice. The beneficial aspect of this approach is seen to be the time freed in the classroom, enabling more interactive activities to be pursued. In contrast, 'broad' flipping occurs when the material provided for study outside of the classroom offers something different to the standard lecture coverage. The results of Webb et al. (2018) for the teaching of economic theory show that it is only 'broad' flipping which improves student performance. Under this approach, e-learning materials are constructed from the outset in an unstructured way. No conclusions are provided and there is no single means to engage with the material. Numerous stand-alone perspectives are presented, allowing students to decide themselves how they should consider the information. Twinning the theory with real-world practical applications, the materials are constructed to empower students in how they engage.

The current study adopts an approach which is in keeping with that of Webb et al. (2018). Although it considers a different subject matter, focusing on technical analysis rather than critical analysis, it adopts a broad flipping approach designed to allow student choice in the engagement with the online materials.
As such, the views of Watson, Webb, Cook, and Arico (2015), Ryan and Tilbury (2013), French and Kennedy (2017) and Matheson and Sutcliffe (2017) are shared, favouring the integration of both learner empowerment and more inventive use of classroom time.

The E-learning approach

'IT and mathematical skills are interdependent', Hoyles, Wolf, Molyneux-Hodgson, and Kent (2002, p. 3)

The module developed in this paper involves 30 hours of classroom interaction. We build on this traditional staff-student interaction by introducing a flexible learning approach based on the provision of extensive e-learning materials. Details of these materials are given in Table 1. In addition to providing a brief title, Table 1 provides the date of publication and the recorded access statistics for each set of materials. 4 While varied in terms of content and focus, all materials provide students with self-contained packages of flexible learning materials which include complete coverage of a specific topic, incorporating explanatory materials, data-based examples, references to published research and interactive computer-based elements. 5 The computer-based interaction presented in the resources is in keeping with Gordon (2014), which describes the manner in which technology creates opportunities for flexible learning. The constructed materials are designed to encompass and extend both lecture and computer workshop activities, while also giving illustrative examples to demonstrate the applied nature of the methods being used. Importantly, and as is explained below, these materials differ from the standard lecture notes or data sets often provided for modules, offering instead something different and complementary in nature. Also, these materials allow students to work through topics at their own pace and at their convenience from any location and hence meet the Advance HE view that flexible learning should enable 'choice and responsiveness in the pace, place and mode of learning' (Ryan & Tilbury, 2013, p. 8). Moreover, as expressed by Hoyles et al. (2002), the computer-based approach adopted also builds on the interdependent nature of mathematics and IT skills, with the development of skills in mathematics working conjointly with the accessible framework provided by IT. This generates further positive employability spillovers: 'the widespread use of Information Technology (IT) in workplaces has not reduced the need for QS [quantitative skills] but rather changed the nature of the skills required (for example, with employees needing to understand the underlying models used by computer software in order to make effective and accurate use of it)' (Mason et al., 2015, p. 10).

The e-learning is designed to include self-contained mathematics, derivations and discussion. This enables substantial flexibility in its use by students. It can be employed as reading ahead of lectures or computer workshops; it can be drawn upon to set exercises for use ahead of or after classes to illustrate integral issues; revision sessions can be shaped via its use; and students can consult and employ it at any time. It also promotes student control over the nature of lecture and workshop contact hours. This element is also apparent in two weeks that are devoted to recapping material and an even greater emphasis on application. These weeks are timed to occur at the mid-point and end of the module, ahead of the coursework submission and the end-of-year examination respectively.
Within these weeks, student-led sessions are held in which the standard teacher-learner hierarchy is side-lined. Drawing upon the insights obtained from the flexible learning materials, students formulate ideas and determine the content of these sessions. As with the design of the e-learning, this learner empowerment is consistent with the learning gains mentioned by Watson et al. (2015) and Ryan and Tilbury (2013). It also allows for a rapid response to student misconceptions and problems ahead of formal assessment, as championed by Berrett (2012).

More specifically, the combination of discursive and data-based elements in the e-learning materials allows students to benefit from the following aspects: results are available for students to replicate; step-by-step walk-throughs are provided with illustrative examples; interactive elements are provided to allow immediate consideration of results and how they impact on conclusions; topical examples are used to raise engagement levels; there is a combination of different software packages, with more basic software such as Excel combined with sophisticated packages to allow hands-on analysis; specific examples, with clear-cut results, are used to demonstrate the underlying features of mathematical methods; and there is a focus on a design boosting self-efficacy through the provision of 'hidden intuition'.

To illustrate the embedding of 'hidden intuition', we now give more detail of the Forecast Combination e-learning resource. This topic considers how two sets of forecasts of a variable can be merged to create a combined forecast. There are numerous technical matters covered, but one element which proves difficult is the intuition underlying the determination of the relative proportions of each set of forecasts to use to create an improved joint forecast (e.g. is it 90% of one and 10% of another?). Following conventional notation, these proportions are denoted as λ* for the second set of forecasts and (1 − λ*) for the first set of forecasts. After application of optimization using calculus, the resulting algebraic expression, as given in Equation (1), is not necessarily straightforward or intuitive:

λ* = (σ₁² − ρσ₁σ₂) / (σ₁² + σ₂² − 2ρσ₁σ₂)    (1)

where σᵢ² denotes the variance of the errors of forecast i and ρ is the correlation between the two sets of forecast errors. The technical requirements involved in the mathematics leading to (1) are understood by students, given their prior knowledge of concepts such as partial differentiation, variance and correlation. However, interpretation of Equation (1) can understandably prove problematic. For example, how will λ* vary if the value of σ₁ changes? This component appears in the numerator and the denominator; it appears squared and not squared; it is added, and it is subtracted. While the mechanics involved in the derivation of the result are important, a vital issue is the intuition underlying the value of λ*. How do the inputs (i.e. the properties of the forecasts used) influence its value? Given the nature of the forecasts considered, what value might be expected? Although standard lecture material will typically fail to address these questions, the interactive nature of this e-learning resource provides students with further insight. Included in the e-learning is a spreadsheet which allows automated calculation of λ* for limitless different combinations of the inputs, i.e. {ρ, σ₁, σ₂}. Extremely intuitive results, originally hidden by the complex algebra, therefore come to light.
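A minimal Python sketch (not the module's Excel resource) of the behaviour of λ* in Equation (1); it simply evaluates the optimal weight and the resulting combined error variance for a few assumed values of σ₁, holding σ₂ and ρ fixed, which reproduces the intuition discussed below.

```python
def optimal_lambda(sigma1, sigma2, rho):
    """Weight on the second forecast that minimises the combined error variance (Eq. (1))."""
    return (sigma1**2 - rho * sigma1 * sigma2) / (sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)

def combined_variance(lam, sigma1, sigma2, rho):
    """Error variance of the combination (1 - lam) * forecast1 + lam * forecast2."""
    return ((1 - lam)**2 * sigma1**2 + lam**2 * sigma2**2
            + 2 * lam * (1 - lam) * rho * sigma1 * sigma2)

# Assumed illustrative inputs: vary sigma1 while holding sigma2 and rho fixed.
sigma2, rho = 1.0, 0.3
for sigma1 in (0.5, 1.0, 1.5, 2.0):
    lam = optimal_lambda(sigma1, sigma2, rho)
    var_c = combined_variance(lam, sigma1, sigma2, rho)
    print(f"sigma1 = {sigma1:.1f}: lambda* = {lam:.3f}, combined error variance = {var_c:.3f}")
```

Running the loop shows λ* rising as σ₁ grows, which is precisely the pattern described next.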
For example, the student can determine that, ceteris paribus, greater values of λ* are observed for greater values of σ₁. That is, they can see that when the first set of forecasts is more volatile and, in crude terms, less attractive, we use less of it. While standard lecturing methods focus on the algebra, the e-learning allows for hands-on experimentation which advertises the practical application and the mechanics of the procedure. This discussion of the hidden intuition masked by complex algebra is also relevant to a further feature of the e-learning material which allows consideration of the properties of the combined forecast. A screenshot from the interactive Excel file created by student interaction is provided in Figure 1. This aspect of the e-learning, which supplements explanatory text and discussion, uses visual means to encourage a deeper understanding of what is meant by 'optimisation'. The student can see that the calculus involved in the analysis can be viewed as a search procedure associated with a value which minimizes the value of a function (i.e. we are simply looking for the lowest point on a curve). Such interactive graph creation can contribute to the promotion of self-efficacy, as the interactive nature of the resource allows endless examples and results to be generated via the use of alternative inputs. Consequently, the user can choose values with more straightforward intuition to create more accessible results and, once confidence is built, then proceed to more complex cases. As a result, technical material is provided in an intuitive manner with solutions accompanied by graphical presentation, thus mitigating anxiety and focusing on the practical purpose of the alternative quantitative methods that are adopted.

The nature of this specific set of materials illustrates the 'complementary' nature of the e-learning packages available for this module. That is, the materials are not the typically provided lecture notes, data sets or computer lab worksheets, but instead provide an alternative delivery and presentation of topics designed to develop additional understanding. The above 'forecast combination' example illustrates the use of interactive packages within the e-learning materials to allow the generation of limitless examples of results, permitting repeated practice and exposure to varying outcomes. This features in other e-learning resources, with, for example, the directional forecasting and forecast evaluation materials containing packages which allow users to input whatever series they wish to automatically generate limitless results in relation to the prediction of changes in the movement of variables and forecast accuracy, respectively. In other instances, a greater emphasis is placed upon step-by-step presentation and deconstruction of methods so that details and subtleties associated with methods can be identified and explained. However, in general, the e-learning materials provide alternative perspectives on topics which combine what might be considered traditional lecture and lab materials, but in a different format which incorporates interactive elements to create explained examples, provide links to the literature, synthesise alternative software packages and incorporate data-based examples.
Therefore, the materials have the additional advantage of offering something new to students who have attended classes while also providing a means for absent students to avoid falling behind, something a more restricted approach to flexible learning such as lecture capture does not offer.

Empirical analysis

Details of how the flexible learning methods have been embedded over the time frame of our study are provided in Table 2. The sets of flexible learning materials (FLM) have increased from 3 to 8 over the 4 years, leaving only one topic covered on the module without e-learning support. 6 As FLM has increased over the time frame of our experiment, there has been a coinciding increase in the mean module mark (notably increasing from 55.1% to 70.7%), the number of students taking the module (from 15 to 27) and the percentage of 'good honours' outcomes (from 27% to 85%). 7 The only factor that has not seen any trend is the percentage failure rate. Over the four years considered, there is some variation but no overall change from the first to the fourth year (approximately 7%). Given the results in Table 2 and the module's unchanged syllabus during the period considered, the impact of flexible learning upon student performance seems very apparent, particularly as the mean marks and good honours outcomes both remained near constant during the two years with the same application of FLM. 8

However, the outcomes reported could well be influenced by cohort effects, with movements in marks simply reflecting changes in student characteristics and a shift towards a student body with higher innate technical skills. To control for such cohort effects, the marks for each student on this module are examined relative to their marks on all the other modules they undertook in their final year of study. That is, two series are examined for each year: one containing student marks on this module and the other containing the average marks achieved across other modules for these students. 9 The resulting mean difference in these marks, the t-statistic for this mean being zero and the associated p-value for this test are provided in Table 3. These results show that there is variation in the mean difference through the years. However, in only one year is this difference statistically different from zero. This is the final year, in which the full set of FLM coincides with students scoring substantially and significantly higher on this module (9.3%) than they scored elsewhere. Our results therefore reject cohort effects, strengthening the evidence of a substantial improvement in outcomes as a result of the provision of our flexible learning materials.

To further understand the nature of the outcomes obtained, the distribution of student marks for each year is also investigated. This analysis is prompted by the NUS Student Experience Report of 2008, which classifies students according to their underlying motivations. It identifies students as 'Next Steppers', 'Toe Dippers', 'Options Openers' and 'Academics'. The labels reflect their underlying motivation for attending University. Are they looking to achieve career goals? Are they looking for experiential benefits from a new life at University? Are they looking to learn interesting subjects or to be stretched academically? This form of attitudinal decomposition leads to consideration of issues which may help understanding of the success or failure of pedagogical innovations. Consider again our reference to 'grade targeting'.
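The comparison described above amounts to a paired t-test of each student's mark on this module against their average mark elsewhere. A sketch with hypothetical marks (the real data underlying Table 3 are not reproduced here), assuming SciPy is available:

```python
import numpy as np
from scipy import stats

# Hypothetical final-year marks: this module vs. each student's mean mark elsewhere.
module_marks = np.array([72, 68, 75, 81, 59, 66, 70, 77, 64, 73], dtype=float)
other_marks = np.array([63, 65, 66, 70, 58, 61, 62, 69, 60, 64], dtype=float)

# Paired comparison: is the mean within-student difference zero?
diff = module_marks - other_marks
t_stat, p_value = stats.ttest_rel(module_marks, other_marks)
print(f"mean difference = {diff.mean():.1f} percentage points, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```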
An improvement in learning materials, rather than improving outcomes, could simply make it easier to achieve a target level of performance. To explore the distribution of outcomes further and shed some light on these potential satisficing effects, the Triples Test of Randles, Fligner, Policello, and Wolfe (1980) is employed. This test allows non-parametric hypothesis testing of asymmetry in the distribution of marks. By considering potential asymmetry in the mark distribution, we can test whether statistically significant 'separation' of marks occurs, where some students lie at one end of the distribution while others are clustered at the other. While complex in nature, the central features of this approach involve the calculation of an asymmetry statistic (η̂) and an associated test statistic for its significance (T). 10

Notes to Table 2: the notation refers to the number of flexible learning materials available during the time of the delivery of the module (FLM); the mean module mark (Mean); the number of students taking the module (No.); the number of students failing the module (Fail); and the percentage of students obtaining 'good honours' outcomes (GH).

Notes to Table 3 (final-year row: Difference 9.3, t-statistic 4.14, p-value <0.01): the notation refers to the average difference between the marks obtained by students on this module and their average mark on all other final year modules taken (Difference); the t-statistic for the null hypothesis that the students' marks on this module do not differ from their marks elsewhere (t-statistic); and the p-value associated with the test of no significant difference in means (p-value).

The results obtained from application of Triples testing are reported in Table 4. Table 4 shows that there are two years in which significant results are obtained. In both cases negative asymmetry is present. This indicates that there is a longer or flatter left-hand tail in the distribution of marks, corresponding to a small number of low marks at one end of the distribution and a larger number of higher marks at the other. These results reflect the message from earlier findings concerning fail rates, average marks and good honours outcomes, as they confirm the statistical significance of the movement towards the higher end of the distribution. Taking the results in the three tables in combination, there is evidence to support the notion that the final year of the study (2016-2017) has a large proportion of students who, rather than undertaking grade targeting, exhibit the motivation of 'academics' in the National Union of Students (2008) terminology and are seeking to maximise their performance. We not only see an increase in the mean module mark, but also a significant negative skew of the marking distribution, reflecting a movement of student outcomes away from lower scores towards higher levels of performance.

Conclusion

The discussion has centred upon the impact of the introduction of flexible learning on a highly technical quantitative methods undergraduate module. Considering results over a four-year period, our empirical approach finds that substantial positive gains have been generated through the adoption of flexible learning techniques. We build on the work by Webb et al. (2018), which identifies the need for a 'broad' application of e-learning methods designed around student empowerment. If the aim is to simply replicate the material expected in the lecture environment, there are unlikely to be any positive effects.
However, if there is a significant change in the pedagogical approach, then highly positive outcomes are significantly more likely. Our change in pedagogical approach allows for methods encouraging 'hidden intuition' effects and a positive breakdown of the lecturer-student hierarchy in face-to-face sessions. The empirical analysis adopted shows that this produces substantial gains in learning outcomes. During a period of steady increase in the use of flexible learning, there is strong evidence of increased self-efficacy and increased skills in quantitative methods.

(Notes to Table 4: the asymmetry statistic follows Randles et al. (1980); * denotes significance at the 5% level using the standard normal distribution.)

Adding to the quantitative methods curriculum is clearly important to address the quantitative skills deficit. However, while an unquestioned increase is unlikely to produce the improvements required, we argue that a pedagogical approach incorporating flexible learning methods can help eliminate the quantitative methods deficit and consequently help meet the skills demands of graduate employers.

Notes
1. The term e-learning is used in the present paper to refer to online flexible learning materials beyond the typical and familiar online support offered by Blackboard and similar packages.
2. As the module was introduced in the 2013-14 academic year, it is in its fifth year, with four complete years of student outcomes available for analysis at the time of writing.
3. While reference is made herein and within the pedagogical literature to the notion of a 'traditional' lecture, it is recognised that this is a difficult notion for which to provide an exact definition.
4. The first set of materials ('Forecast Evaluation') pre-dates the current module as it was developed for a similar postgraduate module on an Economics scheme which is no longer in existence.
5. These materials are all freely available via the Web links provided in the references.
6. Flexible learning materials of the form available for all other topics are not provided for this final, single topic. Quite simply, the reason for this is that the relevant materials have not yet been finalized for submission for potential publication, rather than there being a deliberate policy to avoid doing so due to the nature of the topic or for any other reason.
7. Good honours within the UK education system refers to marks in the upper second and first-class degree classifications. That is, marks of 60% and above.
8. The syllabus has not changed in that the same core topics are still considered. However, data sets and examples have been updated and revised to ensure topicality, in addition to updating the version of the specialist software employed (EViews) as new releases became available. Importantly, these issues have in no way changed the pitch or rigour of the module.
9. Consequently, equal sample sizes are considered for each of the years examined. For example, in the final year considered, marks on this module for 27 students are compared with the average marks across other modules for these 27 students, thus leading to the comparison of two series of 27 observations. As noted by a referee, the level of module enrolment results in relatively small sample analysis being undertaken and results should be considered with this in mind.
10. Further details on the (combinatorial) nature of the Triples testing approach are available upon request from the authors, in addition to the seminal paper of Randles et al. (1980).
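Expanding on note 10, the sketch below shows the combinatorial core of the Triples approach: for every triple of marks it records whether the middle value lies closer to the smaller or the larger of the other two, and the asymmetry statistic η̂ is the average of these indicators. It is only a sketch: the marks are invented, and significance is assessed with a sign-flipping randomisation around the sample median rather than the asymptotic variance estimator derived by Randles et al. (1980), so it is a simplified stand-in for the published test rather than a line-for-line implementation.

```python
import numpy as np
from itertools import combinations

def eta_hat(x):
    """Average triple-asymmetry indicator: +1/3 for a 'right' triple
    (middle value nearer the minimum), -1/3 for a 'left' triple, 0 for ties."""
    total = 0.0
    for a, b, c in combinations(x, 3):
        total += (np.sign(a + b - 2 * c)
                  + np.sign(a + c - 2 * b)
                  + np.sign(b + c - 2 * a)) / 3.0
    n = len(x)
    return total / (n * (n - 1) * (n - 2) / 6)

rng = np.random.default_rng(1)
marks = rng.normal(68, 10, 27) - rng.exponential(6, 27)   # invented, left-skewed marks

observed = eta_hat(marks)

# null distribution: symmetry about the median, approximated by random sign flips
centred = marks - np.median(marks)
null = [eta_hat(np.median(marks) + centred * rng.choice([-1, 1], size=len(marks)))
        for _ in range(200)]
p_value = np.mean(np.abs(null) >= abs(observed))

print(f"eta_hat = {observed:.3f}, two-sided randomisation p-value ~ {p_value:.2f}")
```

A negative η̂ with a small p-value corresponds to the 'negative asymmetry' reported above: a longer left-hand tail, with most marks clustered towards the top of the distribution.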
Disclosure statement No potential conflict of interest was reported by the author.
v3-fos-license
2020-07-25T13:12:54.640Z
2020-07-24T00:00:00.000
221321043
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2020.00363/pdf", "pdf_hash": "0c35e7e11f9a7094da869a20c1ebe2f20bcede85", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:44", "s2fieldsofstudy": [ "Medicine" ], "sha1": "0c35e7e11f9a7094da869a20c1ebe2f20bcede85", "year": 2020 }
pes2o/s2orc
Chronic Thromboembolic Pulmonary Hypertension in a Child With Sickle Cell Disease Chronic thromboembolic pulmonary hypertension is a potentially curable form of pre-capillary pulmonary hypertension (PH) resulting from incomplete resolution of pulmonary thromboemboli. We describe an 11-year-old boy with homozygous sickle cell disease with an indwelling catheter found to have severe PH on routine screening echocardiography. The diagnosis was confirmed by CT, ventilation-perfusion scintigraphy, and right heart catheterization. The patient was medically managed until undergoing pulmonary thromboendarterectomy with resolution of his PH. This case highlights the need for pediatric providers to be aware of this underdiagnosed form of PH, particularly for patients at high risk. INTRODUCTION Chronic thromboembolic pulmonary hypertension (CTEPH) is a distinct form of pulmonary hypertension (PH) that results from unresolved acute pulmonary embolism. The disease is caused by mechanical obstruction of the pulmonary arteries by chronic, fibrotic organized thrombi (1,2). It is rarely diagnosed in children and has an unknown incidence in the general population (3)(4)(5). Early diagnosis and treatment are critical, particularly because patients will develop severe PH and right heart failure. When recognized in a timely manner, the disease is often curable by pulmonary thromboendarterectomy (PTE) (6)(7)(8)(9). We report an 11-year-old boy with sickle cell disease and an indwelling venous catheter found to have elevated right ventricular (RV) systolic pressure on routine echocardiography. Further workup led to the diagnosis of CTEPH, and he was successfully treated with PTE. CASE An 11-year-old boy with hemoglobin SS sickle cell disease (SCD) was referred to our hospital for further treatment of CTEPH. He had a history of multiple pain crises, acute chest syndrome, and acute ischemic strokes at ages 3 and 7 years. His SCD was further complicated by moyamoya syndrome, for which he underwent encephalodurosynangiosis at age 7 years. His hypercoagulability workup had been negative for antithrombin deficiency, protein C deficiency, protein S deficiency, factor V Leiden, plasminogen deficiency, and anticardiolipin antibodies. Approximately 10 months before his referral, routine screening transthoracic echocardiogram (TTE) revealed a right atrial (RA) thrombus thought to be related to his Broviac central venous catheter, which had been used for exchange transfusions. He was started on enoxaparin and his Broviac catheter was replaced. TTEs over the ensuing months demonstrated persistent RA thrombus without change in size, with normal RV pressure and normal biventricular systolic function. Seven months later, the patient was electively admitted to the referring institution in anticipation of a bone marrow transplant (BMT), at which time routine TTE showed an elevated RV systolic pressure (58 mmHg plus the RA pressure), a change from his previous echocardiograms. CT scan of the chest at that time revealed multiple bilateral lower lobe and left upper lobe pulmonary emboli. Clinically, he reported mild dyspnea on exertion and exercise intolerance for several weeks. Medical management was started with bosentan and the patient's enoxaparin dose was increased. Follow-up TTE performed 3 months later showed severe PH, with RV systolic pressure of 90 mmHg plus the RA pressure and a corresponding blood pressure of 108/52 mmHg. 
This raised suspicion for CTEPH, which was supported by a ventilation-perfusion (VQ) scan showing multiple areas of wedge-shaped mismatched perfusion defects consistent with chronic bilateral thromboembolic disease and secondary PH, consistent with CTEPH. The following day, right heart catheterization demonstrated pulmonary arterial pressure of 80/35 mmHg with a mean of 52 mmHg, pulmonary capillary wedge pressure of 14 mmHg, cardiac index of 5.7 L/min/m², and pulmonary vascular resistance index of 6.6 WU·m². Pulmonary angiography revealed multiple areas of abrupt tapering of the pulmonary arteries, confirming the diagnosis. The patient was switched from bosentan to macitentan and riociguat, and he was referred to our center for PTE. At admission, his blood pressure was 116/72 mmHg, heart rate was 114 beats per minute, respiratory rate was 18 breaths per minute, and oxygen saturation was 97% on room air. The result of the physical examination was unremarkable. Quantitative hemoglobin S was abnormal at 13.5%, and NT-ProBNP was elevated at 880.0 pg/mL (normal range, 10.0-242.0 pg/mL). The results of the remaining laboratory tests, including coagulation tests, were normal. He successfully underwent bilateral PTE and removal of a calcified organized thrombus from the right atrium (Figure 1) without reported intraoperative complications. Post-operative transesophageal echocardiography demonstrated an estimated RV systolic pressure of 32 mmHg plus the RA pressure in the setting of a systolic blood pressure of 110 mmHg, with mildly decreased RV function. Central venous pressure was maintained under 8 mmHg with a furosemide infusion to avoid reperfusion injury. PH medications were discontinued at the time of surgery.

Abbreviations: BMT, bone marrow transplant; CTEPH, chronic thromboembolic pulmonary hypertension; RA, right atrial; RV, right ventricular; PH, pulmonary hypertension; PTE, pulmonary thromboendarterectomy; SCD, sickle cell disease; TTE, transthoracic echocardiogram; VQ, ventilation-perfusion.

Ten days after the procedure, the patient was completely asymptomatic with normal oxygen saturation and was discharged home on long-term warfarin. Four months later, he successfully underwent BMT, after which he had weekly TTEs to monitor for the development of PH (10). As of 6 months following surgery, the patient has remained clinically asymptomatic without echocardiographic evidence of RV hypertension based on tricuspid regurgitant jet and systolic septal position.

DISCUSSION

To our knowledge, successful PTE in a child with CTEPH and SCD has not been reported in the MEDLINE database to date. Our patient was noted to have unexplained PH on routine screening echocardiography in the setting of a chronic hypercoagulable state and recent RA thrombus associated with a central venous catheter, and his diagnosis was confirmed by lung VQ scan and right heart catheterization. While a recently published case report described a 12-year-old with CTEPH successfully treated with PTE, that patient had severe comorbidities including paraplegia, and he was found to have factor V Leiden and antiphospholipid antibodies during his hypercoagulable workup (11). CTEPH is a rare and life-threatening condition that can result in progressive right-sided heart failure and death. It occurs as a result of unresolved thrombi obstructing the pulmonary arteries.
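As a quick consistency check on the catheterization numbers quoted above, the indexed pulmonary vascular resistance can be recovered from its standard definition; this back-of-the-envelope calculation is added here for clarity and is not part of the original report.

```latex
\mathrm{PVRI} \;=\; \frac{\mathrm{mPAP}-\mathrm{PCWP}}{\mathrm{CI}}
\;=\; \frac{52\,\mathrm{mmHg}-14\,\mathrm{mmHg}}{5.7\,\mathrm{L\,min^{-1}\,m^{-2}}}
\;\approx\; 6.7\ \mathrm{WU\cdot m^{2}}
```

This agrees with the reported 6.6 WU·m² to within rounding of the quoted mean pressures and cardiac index.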
The following criteria are used to make the diagnosis after 3 months of anticoagulant therapy: (1) mean pulmonary artery pressure >25 mmHg with a pulmonary capillary wedge pressure ≤15 mmHg, and (2) at least one (segmental) perfusion defect detected by lung scan, CT angiography, or pulmonary angiography (1, 2, 12). Despite increasing awareness of it, the disease remains underdiagnosed. Studies suggest an incidence of 0.56-3.2% in adult pulmonary embolism survivors, while incidence in the pediatric population is unknown (3)(4)(5). Of note, risk factors for thromboembolism are identified in the majority of children with CTEPH, and approximately one third of patients have a positive family history of thromboembolism or a known hypercoagulable state. Children with lupus anticoagulant and anticardiolipin antibodies are at the highest risk of the disease (9). Other risk factors include splenectomy, infected ventriculo-atrial shunts, thyroid replacement therapy, history of malignancy, chronic inflammatory conditions, and indwelling catheters (13). Our patient did not have a history of an acute pulmonary embolism. Nevertheless, he had several risk factors for CTEPH. Specifically, he had a history of homozygous SCD, which is recognized as a chronic hypercoagulable state with an increased risk of thromboembolic events and PH (14)(15)(16)(17). He also had an indwelling catheter for monthly exchange transfusions and history of catheter-related RA thrombus. Homozygous SCD additionally confers a significant risk of autosplenectomy, for which the patient did not undergo sonographic assessment at our institution (18). Because there are no pathognomonic signs or symptoms for CTEPH, the diagnosis is often delayed or missed. Patients may present with exertional dyspnea, exercise intolerance, and nonspecific abnormalities on physical examination. As the disease progresses, there is a high risk of developing right heart failure. While the natural history of acute pulmonary embolism is nearcomplete resolution of emboli within 3-6 months, the persistence of any signs or symptoms after this duration of antithrombotic therapy warrants further investigation. Diagnostic workup begins with chest radiography, pulmonary function studies, an ECG, and an echocardiogram. If CTEPH is suspected, a lung VQ scan should assess for subsegmental or larger unmatched perfusion defects. Given its high sensitivity, a normal lung VQ scan can effectively rule out the disease, while an abnormal test result prompts further evaluation with right heart catheterization, catheter-based pulmonary angiography, CT pulmonary angiography, or MRI (1,19,20). The first step in management is anticoagulant therapy. Our patient was initially on subcutaneous low molecular weight heparin, and he was later transitioned to an oral anticoagulant. Once the diagnosis was established, he was also started on targeted PH therapy, including macitentan, an endothelin receptor antagonist, and riociguat, a soluble guanylate cyclase stimulator, while awaiting definitive surgery. Riociguat was chosen because it has been shown to improve exercise capacity and pulmonary vascular resistance in patients with CTEPH, and because it is safe and well-tolerated in patients with SCD (21,22). PTE is the treatment of choice for operable patients, and its success has been demonstrated in children (9,23). To be considered operable, a patient must have sufficient surgically accessible thromboembolic material without extensive distal disease (24). 
Patients with SCD may have additional risks with PTE, given the need for prolonged cardiopulmonary bypass, deep hypothermia, and intervals of circulatory arrest, factors that increase the likelihood of sickling (25). Balloon pulmonary angioplasty is an emerging option for inoperable CTEPH or patients with recurrent or persistent PH after PTE; however, this approach is rarely used in children and long-term results are lacking (26). In our case, a multidisciplinary team including pulmonary hypertension, cardiology, hematology, critical care, and cardiothoracic surgery specialists reviewed the patient's clinical data and elected to proceed with surgery, which the patient underwent without complication. Given the success of this case, it is important to consider CTEPH in any child with unexplained PH, particularly when risk factors are present.

CONCLUSIONS

We describe a rare case of CTEPH in a child with SCD and an indwelling catheter who was found to have unexplained PH. CTEPH is a rare and life-threatening disease. Unlike other forms of PH, it is potentially curable with PTE. For this reason, early recognition and treatment are critical. Practitioners should consider this diagnosis in patients with unexplained PH, particularly in patients with risk factors, including but not limited to those with a hypercoagulable state, a history of thromboembolism, or an indwelling catheter.

ETHICS STATEMENT

Parental informed consent was obtained for the publication of this case report.

AUTHOR CONTRIBUTIONS

RS was the consulting cardiology fellow for the patient. GV was the referring cardiologist who diagnosed the patient, and he contributed references and revisions to the manuscript. KT performed the patient's thromboendarterectomy, and he contributed revisions to the manuscript. ER was the precepting attending for RS, and she contributed references and revisions to the manuscript. All authors approved the final version.

ACKNOWLEDGMENTS

We acknowledge the contributions of the many physicians, nurses, and other staff members who cared for this patient but are not named as authors. Additionally, we thank the family for consenting to the publication of this work so that others might learn from this case.
v3-fos-license
2019-04-05T03:27:45.279Z
2014-02-27T00:00:00.000
95427300
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.hoajonline.com/journals/pdf/2055-0898-1-1.pdf", "pdf_hash": "324eb188c4df6fdb83e903962632ee49742b15ee", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:45", "s2fieldsofstudy": [ "Chemistry" ], "sha1": "673103144910dd8f5fc9bd2c790267e8efac22c7", "year": 2014 }
pes2o/s2orc
2-Aminoethyl diphenylborinate (2-APB) analogues: part 2. Regulators of Ca2+ release and consequent cellular processes

In order to obtain compounds with modified 2-APB activities, we synthesized a number of bis-boron 2-APB analogues and analyzed their inhibitory activities for SOCE and IICR. Adducts of amino acids with bis-borinic acid showed the highest activity. The IC50 of 2-APB for SOCE inhibition was 3 μM, while the IC50 of 2051, bis(4,4'-(phenyllysineboryl)benzyl) ether, was 0.2 μM. By using these compounds, we may be able to regulate Ca2+ release and consequent cellular processes more efficiently than with 2-APB.

In 1997, we identified 2-aminoethyl diphenylborinate (2-APB) as an IP3 receptor inhibitor which regulates IP3-induced calcium release [21,22]. This discovery generated substantial interest: it has attracted more than 600 citations, and more than 1000 studies on 2-APB have been published so far (examples are references 23-37). This was supported by increasing sales of 2-APB by Sigma-Aldrich as a membrane-permeable modulator of intracellular IP3-induced calcium release. In this study, we aimed to generate a better modulator of calcium release than 2-APB. Here we analyzed the SOCE and IICR inhibitory activities of our collection of bis-boron compounds. The analysis of poly-boron compounds will be reported in our upcoming publications. We believe that by regulating Ca2+ release and associated cellular processes with boron compounds, we could therapeutically intervene in many diseases, such as heart disease and Alzheimer's disease.

... added to the mixture of diisopropoxyphenylborane, gradually warmed to room temperature, and then stirred overnight. The reaction was quenched with 1N HCl, the diethyl ether layer was collected, and the water layer was then extracted twice with diethyl ether. The combined diethyl ether layers were dried over MgSO4 and concentrated. The crude residue was purified by flash column chromatography on silica gel (n-hexane/EtOAc = 3:1) to give bis(4,4'-(hydroxyphenylboryl)phenyl) ether 1012 (1122 mg, 3.15 mmol, 58.8%) as an oil.

Methods

We have assayed the inhibitory activity of the 2-APB analogues for SOCE and IICR using our improved assays described previously [45].

Results and discussion

We measured the inhibitory activities of the 2-APB analogues for SOCE and IICR. The results are shown in Supplement Table S1. The efficiency of these samples can be judged by comparison with the IC50 of 2-APB for SOCE inhibition (3 µM) and the IC50 of 919, the best sample of our previous paper (45).

Comparison of 2-APB, mono-boron compounds and bis-boron compounds

The IC50 of 2-APB for SOCE inhibition is 3 µM. The IC50 of the best mono-boron compound (example 919) in the previous paper (45) is 0.2 µM. The IC50 of the best bis-boron compound reported in this paper (example 1024) is 0.2 µM. That is, the bis-boron compound reported in this paper and the mono-boron compound reported in the previous paper (45) showed almost the same activity, about 10 times stronger than that of 2-APB. Mono-boron compounds are easy to prepare, whereas bis-boron compounds are somewhat more difficult to prepare.

Comparison of bis-phenyl ether and bis-benzyl ether

When we compare the compounds mentioned in Sections 3.1 and 3.2, and in particular compounds 1022, 4020 and 162AE, we can tell that there is not much difference between the bis-phenyl ether type compound (1022) and the bis-benzyl ether type compounds (4020, 162AE).
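IC50 values such as the 3 µM quoted for 2-APB and the roughly 0.2 µM quoted for the best analogues are conventionally extracted by fitting a dose-response curve. The sketch below fits a four-parameter logistic (Hill) model to invented data purely to illustrate that step; the concentrations, responses and parameter choices are hypothetical and do not describe the authors' actual assay analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, n):
    """Four-parameter logistic: the response falls from `top` to `bottom`
    as the concentration passes through `ic50` with Hill slope `n`."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** n)

# hypothetical SOCE signal (% of control) at log-spaced inhibitor concentrations (uM)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
resp = np.array([99.0, 96.0, 88.0, 70.0, 45.0, 22.0, 10.0, 6.0])

popt, _ = curve_fit(hill, conc, resp, p0=[5.0, 100.0, 1.0, 1.0])
bottom, top, ic50, n = popt
print(f"fitted IC50 ~ {ic50:.2f} uM (Hill slope {n:.2f})")
```

Fitting the same model to curves for different compounds is what makes comparisons such as "3 µM versus 0.2 µM" meaningful on a common scale.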
Comparison of amino acids and ethanolamine

As a reagent to add on to the dihydroxy boron compound, we used amino acids (Sections 3.1 and 3.2) and ethanolamine (Section 3.3). The activities of both types of compounds were quite similar, but the stabilities of the compounds obtained are different. Amino acid adducts are much more stable and easy to purify. We recommend amino acid derivatives over ethanolamine derivatives as regulators of Ca2+ and cellular processes. Also, among the amino acids, basic amino acids having an extra amino group or an amide group, such as lysine, ornithine, asparagine and glutamine, gave compounds with strong activity.

We have synthesized many bis-boron compounds. These compounds were as active as mono-boron compounds on a molar basis. However, when considered on a weight basis, bis-boron compounds are half as active as mono-boron compounds, because the molecular weight of a bis-boron compound is about twice that of a mono-boron compound. These compounds can thus regulate Ca2+ release and the consequent cellular responses more efficiently than 2-APB at pharmacological concentrations. Some of these compounds were shown to inhibit the calcium-dependent enzyme transglutaminase [44]. Transglutaminase inhibitors block the abnormal cross-linking of proteins [43,44] and therefore might slow down or even stop the progression of diseases caused by misfolded proteins, such as Huntington's disease. The 2-APB analogues presented in this study could prove to be excellent lead compounds for many human diseases including heart disorders [59], Alzheimer's [60,61] and Huntington's disease [62,63]. We have shown different kinds of active compounds with IC50 values ranging from 0.1 to 5 µM. By choosing the appropriate compound, we would be able to control the release of Ca2+ and regulate many cellular processes such as secretion, cardiac contraction, fertilization, proliferation, synaptic plasticity, atrial arrhythmias [31], inhibition of the calcium entry channel [25], excitation-contraction coupling in the heart [32], the arrhythmogenic action of endothelin-1 on ventricular cardiac myocytes [34], dysregulation of neural calcium signaling in Alzheimer disease [61], Huntington aggregation [62,63] and protein cross-linking by transglutaminase [43]. We believe that many investigators will find these reagents regulating Ca2+ release and related cellular processes very useful.

Competing interests

The author declares that he has no competing interests.
v3-fos-license
2016-05-12T22:15:10.714Z
2011-10-26T00:00:00.000
14591010
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0026849&type=printable", "pdf_hash": "f89287e457d6352b991b9aeac6ee5496ea88729e", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:46", "s2fieldsofstudy": [ "Medicine" ], "sha1": "f89287e457d6352b991b9aeac6ee5496ea88729e", "year": 2011 }
pes2o/s2orc
Changes in Waist Circumference and the Incidence of Acute Myocardial Infarction in Middle-Aged Men and Women Background Waist circumference (WC) measured at one point in time is positively associated with the risk of acute myocardial infarction (MI), but the association with changes in WC (DWC) is not clear. We investigated the association between DWC and the risk of MI in middle-aged men and women, and evaluated the influence from concurrent changes in BMI (DBMI). Methodology/Principal Findings Data on 38,593 participants from the Danish Diet, Cancer and Health study was analysed. Anthropometry was assessed in 1993–97 and 1999–02. Information on fatal and non-fatal MI was obtained from National Registers. Cases were validated by review of the medical records. Hazard ratios (HR) were calculated from Cox proportional hazard models with individuals considered at risk from 1999–02 until December 30 2009. During 8.4 years of follow-up, 1,041 incident cases of MI occurred. WC was positively associated with the risk of MI, but weakly after adjustment for BMI. DWC was not associated with the risk of MI (HR per 5 cm change  = 1.01 (0.95, 1.09) with adjustment for covariates, baseline WC, BMI and DBMI). Associations with DWC were not notably different in sub-groups stratified according to baseline WC or DBMI, or when individuals with MI occurring within the first years of follow-up were excluded. Conclusions/Significance WC was positively associated with the risk of MI in middle-aged men and women, but changes in WC were not. These findings suggest that a reduction in WC may be an insufficient target for prevention of MI in middle-aged men and women. Introduction Obesity and weight gain are strong risk factors for coronary heart disease (CHD) [1].Weight loss improves the cardiovascular risk factor profile [2;3], but most long-term population based studies suggest an increased risk of CHD with weight loss [4][5][6][7][8].Pre-existing or sub-clinical diseases and high-risk behaviors (as smoking) have been suggested to explain the increased risk of CHD associated with weight loss, but the risk persist after careful adjustment for these factors [4][5][6][7][8]. Individuals differ in their regional distribution of body fat, which have implications for their morbidity and mortality.Anthropometric measures of abdominal fatness (e.g.waist circumference (WC)) appear to be more strongly associated with the risk of CHD than anthropometric measures of general fatness (e.g.body mass index (BMI)) [9][10][11][12][13][14][15][16][17][18][19][20].This has predominantly been attributed to accumulation of intra-abdominal fat, which is strongly associated with cardiovascular risk factors and possibly also with incident CHD [21][22][23].In contrast, anthropometric measures of peripheral fatness are inversely associated with the risk of CHD [24] possibly due to cardio-protective effects of the skeletal muscle and the gluteofemoral fat [25].Furthermore, two recent studies found that weight loss was associated with increased all-cause and CHD mortality [11;26], whereas waist loss was associated with decreased mortality indicating that loss of abdominal fat mass with preservation of other body compartments is beneficial. 
Although several studies have shown that WC measured at one point in time is associated with the risk of CHD, it is unclear whether changes in WC (DWC) are associated with the risk of CHD. We therefore investigated the association between DWC and the risk of acute myocardial infarction (MI) in a large cohort of middle-aged men and women, and evaluated the influence from concurrent changes in BMI (DBMI).

Methods

In 1993-97, 160,725 individuals aged 50-64 years with no previous cancer diagnosis were invited into the Danish prospective study 'Diet, Cancer and Health' (figure S1). A total of 57,053 participants accepted the invitation. Participants filled in questionnaires and were clinically examined (569 were later excluded due to a cancer diagnosis, which was not, due to processing delays, registered at the time of the invitation). In 1999-2002, repeated information was collected by questionnaires. Details of the study were described previously [27;28].

Ethics statement

The Danish Data Protection Agency and the regional Ethical Committee approved the study, which was in accordance with the Helsinki Declaration II. Participants signed a written consent before participating.

Outcome

Cases of nonfatal and fatal MI (International Classification of Disease (ICD) 8: 410-410.99 or ICD10: I21.0-I21.9) were identified by linkage with the Danish National Patient Register [29] and the Danish Causes of Death Register [30] via the unique identification number assigned to all Danish citizens. Sudden cardiac death (ICD8: 427.27 or ICD10: I46.0-I46.9) was also included if the cardiac arrest was believed to have been caused by MI. From the date of enrolment into the cohort and until December 31st 2003, cases were validated by review of medical records in accordance with the guidelines of the American Heart Association and the European Society of Cardiology [31]. From January 1st 2004 and until December 30 2009, and for participants whose medical records had not been available in the previous period, participants with a diagnosis of MI from a hospital ward were accepted as cases without validation, as this diagnosis had a positive predictive value above 90% in the Patient Register [31]. All other cases were individually validated by review of diagnoses and procedure codes in the Patient Register and the Causes of Death Register.

Exposure

In 1993-97, technicians measured the individuals' height (nearest 0.5 cm, without shoes) and weight (nearest 0.1 kg, using a digital scale, with light clothes/underwear). The WC was measured (nearest 0.5 cm) with a measuring tape at the smallest horizontal circumference between the ribs and the iliac crest (natural waist), or, in case of an indeterminable WC narrowing, halfway between the lower rib and the iliac crest. In 1999-02, individuals received a self-administered questionnaire and reported their weight (kg) and WC (cm) measured at the level of the umbilicus using an enclosed paper measuring tape. BMI (kg/m²) was calculated as weight per height squared. Change in BMI (DBMI) (kg/m²) and change in WC (DWC) (cm) were calculated as the value in 1993-97 subtracted from the value in 1999-02.
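To make the exposure definitions concrete, the snippet below derives BMI at each wave and the change variables DBMI and DWC from two sets of measurements. The column names and the tiny example data frame are invented for illustration; they are not the Diet, Cancer and Health files.

```python
import pandas as pd

# hypothetical anthropometry for three participants at the two examinations
df = pd.DataFrame({
    "height_cm": [178.0, 165.5, 171.0],   # measured in 1993-97
    "weight_93": [82.4, 64.0, 90.1],      # kg, 1993-97
    "wc_93":     [95.0, 79.0, 102.5],     # cm, 1993-97
    "weight_99": [84.0, 66.5, 88.0],      # kg, self-reported 1999-02
    "wc_99":     [98.0, 86.0, 101.0],     # cm, self-measured 1999-02
})

height_m = df["height_cm"] / 100.0
df["bmi_93"] = df["weight_93"] / height_m**2     # BMI = weight / height^2
df["bmi_99"] = df["weight_99"] / height_m**2
df["dbmi"] = df["bmi_99"] - df["bmi_93"]         # change = 1999-02 value minus 1993-97 value
df["dwc"] = df["wc_99"] - df["wc_93"]

print(df[["bmi_93", "bmi_99", "dbmi", "dwc"]].round(2))
```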
Chronic disease may induce changes in anthropometry and increase the risk of MI [4][5][6][7][8]. Chronic disease (yes/no) occurring before examination in 1999-02 was therefore included as a covariate. Chronic disease was defined according to a selection of ICD8 and ICD10 codes representing chronic somatic disease [36]. Information on diagnosed diseases was obtained by linkage to the National Patient Register [29] and the National Diabetes Register [37].

Exclusions

Individuals for whom questionnaires were incomplete were excluded, and so were individuals with a diagnosis of MI before examination in 1999-02 (figure S1). Misreporting may be most pronounced in individuals with extreme measurements, so individuals with values below the 0.5 and above the 99.5 sex-specific percentiles of BMI and WC, and below the 2.5 and above the 97.5 sex-specific percentiles of DBMI and DWC, were excluded (figure S1). These cut-off values were chosen to reduce the influence of outliers on the associations.

Statistical Analyses

Analyses were conducted in STATA version 9.2 (Stata Corporation, College Station, Texas; www.stata.com). Hazard ratios (HR) and 95% confidence intervals of MI were calculated from Cox's proportional hazard models. Years since examination in 1999-02 were used as the time axis. Thus, individuals were considered at risk from 1999-02 until time at MI, death from other causes, emigration/disappearance or December 30 2009, whichever came first. Analyses were conducted for each sex separately, and combined if appropriate. Sex differences were formally tested on the multiplicative scale by cross-product terms using Wald tests. BMI in 1993-97 was included as restricted cubic splines (3 knots) [38;39] in models with age in 1999-02, years between examinations and chronic disease. Covariates were added in a second step and WC in 1993-97 was added in a third step. Similar analyses were conducted for WC in 1993-97 with BMI added in the third step, and for BMI and WC measured in 1999-02. The DBMI was included as restricted cubic splines (3 knots) in models with age in 1999-02, years between examinations, BMI in 1993-97 and chronic disease. Covariates were added in a second step and DWC and WC in 1993-97 were added in a third step. Similar analyses were conducted for DWC with BMI and DBMI added in the third step. Splines were plotted to visually assess the shape of the associations, and associations were formally tested by Wald tests. Continuous covariates were also included as restricted cubic splines (3 knots). BMI, WC, DBMI and DWC were also examined as continuous linear variables in models with covariates added in the three steps described above. The proportional hazard assumption was assessed with a test based on Schoenfeld residuals, and no appreciable violations of the assumption were detected.

Subgroup analyses

To explore if the association between DWC and MI was consistent throughout the range of the DBMI, associations between DWC and MI were investigated in groups with loss (DBMI ≤ 0) and gain in BMI (DBMI > 0). Similarly, the association between DWC and MI was investigated in groups with a high and low WC in 1993-97 (cut-off at the sex-specific median of WC). Atherosclerosis may go undiagnosed for years [40;41] and induce changes in anthropometry. This implies risks of bias due to reverse causality, which we explored by exclusion of cases occurring in the first one to five years of follow-up.
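For readers who want to see what this type of model looks like in code, the sketch below fits a Cox proportional hazards model in which a continuous exposure enters through a 3-knot restricted cubic spline, the same general device used above. It relies on the Python lifelines package, a hand-rolled Harrell-type spline basis and simulated data; it is an illustration of the general approach, not a reproduction of the authors' STATA analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def rcs_basis(x, knots):
    """Restricted cubic spline basis (Harrell parameterisation) for 3 knots:
    returns the linear term plus one non-linear term."""
    t1, t2, t3 = knots
    pos3 = lambda u: np.clip(u, 0, None) ** 3
    nonlin = (pos3(x - t1)
              - pos3(x - t2) * (t3 - t1) / (t3 - t2)
              + pos3(x - t3) * (t2 - t1) / (t3 - t2)) / (t3 - t1) ** 2
    return np.column_stack([x, nonlin])

rng = np.random.default_rng(0)
n = 2000
dwc = rng.normal(5, 6, n)                    # change in waist circumference (cm), simulated
age = rng.uniform(55, 70, n)
hazard = 0.02 * np.exp(0.01 * dwc + 0.03 * (age - 60))   # weak simulated effects
time = rng.exponential(1 / hazard)
event = (time < 8.4).astype(int)             # administrative censoring at 8.4 years
time = np.minimum(time, 8.4)

knots = np.percentile(dwc, [10, 50, 90])
spline = rcs_basis(dwc, knots)
df = pd.DataFrame({"time": time, "event": event, "age": age,
                   "dwc_lin": spline[:, 0], "dwc_nonlin": spline[:, 1]})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # coefficient table with Wald tests for each term
```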
Results

Between the examinations in 1993-97 and 1999-02, 1,778 individuals died and 460 emigrated/disappeared, leaving 54,246 eligible for re-invitation. Among these, 5,865 did not respond, 2,858 did not want to participate and for 1,699 we had incomplete information on anthropometry and covariates. Moreover, 1,006 were excluded due to MI occurring before examination in 1999-02, and 4,225 were excluded due to extreme values on the anthropometric variables. Thus, 17,964 men and 20,629 women were eligible for the current study (figure S1). In 1993-97, the median WC was 95 cm in men and 79 cm in women (table 1). During the 5 years between the examinations, the increase in WC was 3 cm in men and 7 cm in women (table 1). In men, 5,774 (32%) had a loss in WC and 12,190 (68%) had a gain in WC. In women, 3,268 (16%) had a loss in WC and 17,361 (84%) had a gain in WC. The Pearson correlation between BMI and WC was 0.85 in both sexes, and 0.44 and 0.37 between DBMI and DWC in men and women, respectively. During a median follow-up of about 8 years, 739 new cases of MI occurred among men and 305 occurred among women.

Single measurements of BMI and WC

The association between BMI in 1993-97 and MI was positive in both sexes, but weak after adjustment for WC. For the sexes combined, the HR per one kg/m² was 1.03 (1.00, 1.07) after adjusting for covariates and WC (table 2, figure S2). The association between WC in 1993-97 and MI was positive in both sexes, but the association was weak after adjustment for BMI. For the sexes combined, the HR per 5 cm WC was 1.03 (0.97, 1.10) after adjusting for covariates and BMI (table 2, figure S2). Similar results were found for BMI and WC measured in 1999-02 (table 2, figure S3). None of the associations were notably different between men and women (interaction, P > 0.5).

Changes in BMI and WC

The association between DBMI and MI was U-shaped, with the nadir of the curve at DBMI = 0 (figure 1). Thus, for those with loss of BMI (DBMI ≤ 0), one kg/m² decrease in BMI was associated with an 11% (HR = 1.11 (1.02, 1.22)) higher risk of MI, whereas for those with gain in BMI (DBMI > 0), one kg/m² increase in BMI was associated with an 8% (HR = 1.08 (0.97, 1.19)) higher risk of MI, with adjustment for covariates, DWC, BMI and WC in 1993-97. The DWC was not associated with MI (figure 2, table 3). Among all participants, the HR per 5 cm change was 1.00 (0.94, 1.07) with adjustment for covariates and WC in 1993-97, and 1.01 (0.95, 1.09) with additional adjustment for BMI in 1993-97 and DBMI (table 3). None of the associations were notably different between men and women (interaction, P > 0.5).

Subgroup analyses

The DWC was not consistently associated with MI in the two strata of DBMI. The HR was 1.03 (0.95, 1.12) per 5 cm in participants with concurrent loss of BMI and 0.99 (0.91, 1.09) per 5 cm in participants with concurrent gain in BMI, with adjustment for covariates, DBMI, BMI and WC in 1993-97. Neither was the DWC consistently associated with MI in the two strata of WC in 1993-97. The HR per 5 cm was 1.05 (0.95, 1.16) in participants with low WC and 1.00 (0.93, 1.08) in participants with high WC, with adjustment for covariates, DBMI, BMI and WC in 1993-97. Exclusion of cases of MI occurring within the first one to five years of follow-up had no notable influence on the associations between DWC and MI (table S1).
Discussion

This prospective study of middle-aged men and women showed that WC was positively associated with the risk of MI, but the association was weak after adjustment for BMI. DWC was not associated with the risk of MI, and this association was not altered by adjustment for covariates and DBMI, nor in groups defined according to WC in 1993-97 or loss and gain in BMI, or when cases of MI occurring within the first one to five years of follow-up were excluded.

The strengths of the study are the large, well-characterized study population with anthropometry assessed at two time points. Selection bias is unlikely to have affected the results, as all study participants were followed after their second measurement of anthropometry until death or end of follow-up, and the number of participants lost due to death was low [29;30]. Cases of MI were validated by review of the medical records independently of the collection of anthropometry [31], whereby the risk of information bias is low.

Atherosclerosis may go undiagnosed for years [40;41] and its presence may induce changes in anthropometry. This implies risks of bias due to reverse causality, which we aimed to eliminate with our prospective design. We also conducted analyses where cases of MI in the first one to five years of follow-up were excluded. The exclusions had no notable influence on the associations, but we may have had insufficiently long follow-up and too few cases to fully address this. Other diseases may also both induce changes in anthropometry and affect the risk of MI [4][5][6][7][8]. We adjusted for chronic diseases [36] diagnosed before the follow-up examination in 1999-02, but this had no notable influence on the estimates. The registries used to identify these diseased individuals cover our entire cohort and are valid [29]. Individuals with sub-clinical or psychiatric diseases are, however, not identified. We can therefore not fully exclude the influence from such diseases, but find it unlikely that severely diseased individuals would participate in a long-term cohort study, which is supported by the low morbidity and mortality in the cohort [27]. Covariates that could have confounding or modifying effects were also included in the study, but these had no notable effects on the associations. Residual confounding from these, or confounding from other risk factors, cannot be excluded, but the detailed data available makes it unlikely.
Trained technicians measured anthropometry in 1993-97, and measurement problems may have minimal impact on these results.Anthropometry was self-reported in 1999-02, but strong, quantitatively consistent associations between MI and both 1993-97 and 1999-02 measures were observed.This shows that the self-measured data, reported in a questionnaire, are as valid as the examiner-measured data in terms of predicting risk of MI.The use of different methods may, however, still have implications for the analysis and interpretation of changes in WC.A validation study within the cohort [42] found that the mean change in WC was somewhat overestimated in women (2.1 cm) and underestimated in men (0.8 cm).The difference was associated with BMI and WC, but it was concluded that the two measures could be used together in analyses of DWC if the statistical models included BMI and WC [42].Accordingly, we included BMI and WC in analyses of DWC.We also excluded individuals with extreme anthropometric measurements as misreporting may be most pronounced in these individuals.Perhaps more important, information on WC was collected years before information about MI.It is thus unlikely that the over/underestimation of DWC is directly related to MI, which limits the risk of bias.Still we cannot exclude that errors have attenuated the results.DWC was, however, positively associated with all-cause mortality in the cohort [26] and we therefore expect that the used measure of DWC would capture most of the effects on MI. Fatness, and in particular abdominal fatness, is assumed to increase the risk of MI [9][10][11][12][13][14][15][16][17][18][19][20][21][22].This was also shown in our study, as WC was positively associated with the risk of MI.The association was, however, weak after adjustment for BMI as also observed in some previous prospective studies [9][10][11][12][13][14][15][16][17][18][19].Adjustment for BMI in analyses of WC may reduce confounding from overall fatness, but the adjustment does also introduce a substitution aspect in the interpretation of the results.The risk of MI associated with a high WC in the adjusted model may reflect the effects of high amounts of abdominal fat or low amounts of gluteofemoral fat or lean body mass.In this regard, it is noteworthy that anthropometric measures of peripheral fatness are inversely associated with the risk of CHD after adjustment for BMI [24;25]. Changes in WC were not associated with the risk of MI, and adjustment for DBMI had no notable influence on this association possibly due to the modest correlation between DWC and DBMI.Accordingly, our findings suggest that it is not possible to predict the risk of MI associated with changes in WC from the risk associated with differences in WC measured at one point in time.The association with WC at one point in time may reflect lifelong exposure, whereas the risk associated with changes in WC may reflect the individual possibility to modulate such lifelong risk during a short five-year period. 
A previous study [11] found that DWC was positively associated with the risk of mortality from CHD in postmenopausal women with heart disease, but only among women assigned to hormone therapy and who were in the extreme five percent of the waist change distribution. Estimates adjusted for overall weight change were not shown. The association between DWC and CHD may depend on various factors such as sex, age and health status of the study individuals [4][5][6][7][8][9]. Our participants were 50-64 years at baseline. It may hence be suspected that they had already redistributed fat mass to the abdominal fat depots [43] and therefore were too old to influence their risk of MI by modest changes in WC. DWC may also have a different impact on morbidity and mortality from CHD, with stronger associations for mortality [17], as also indicated in our cohort where DWC was positively associated with all-cause mortality [26]. This could explain the differences between these [11] and our findings.

In conclusion, WC was positively associated with the risk of MI in middle-aged men and women, but the association was weak after adjustment for BMI. DWC was not associated with the risk of MI, and this association was not notably affected by adjustment for changes in BMI. According to these findings it is not possible to predict the risk of MI following changes in WC from studies where WC is only measured at one point in time. A reduction in WC may hence be an insufficient target for prevention of MI in middle-aged men and women.

Figure 1. Hazard ratios (HR) and 95% confidence intervals (CI) of myocardial infarction (MI) according to changes in body mass index (BMI). Abbreviations: BMI, body mass index; HR, hazard ratio; MI, myocardial infarction. Lines are the hazard ratios (shaded areas the 95% confidence intervals) derived from Cox proportional hazard models with changes in body mass index included as restricted cubic splines (3 knots). Reference points are the mean of changes in body mass index. Adjusted for: sex, years between examinations, age, chronic diseases, smoking, Mediterranean diet score, energy intake, education, drinking pattern, sports activity, body mass index in 1993-97, waist circumference in 1993-97 and changes in waist circumference. doi:10.1371/journal.pone.0026849.g001

Figure 2. Hazard ratios (HR) and 95% confidence intervals (CI) of myocardial infarction (MI) according to changes in waist circumference (WC). Abbreviations: HR, hazard ratio; MI, myocardial infarction; WC, waist circumference. Lines are the hazard ratios (shaded areas the 95% confidence intervals) derived from Cox proportional hazard models with changes in waist circumference included as restricted cubic splines (3 knots). Reference points are the mean of changes in waist circumference. Adjusted for: sex, years between examinations, age, chronic diseases, smoking, Mediterranean diet score, energy intake, education, drinking pattern, sports activity, body mass index in 1993-97, waist circumference in 1993-97 and changes in body mass index. doi:10.1371/journal.pone.0026849.g002

Table 1. Characteristics of the participants.

Table 2. Hazard ratios (HR) and 95% confidence intervals (CI) of myocardial infarction according to body mass index (BMI) and waist circumference (WC).
Abbreviations: BMI, body mass index; CI, confidence interval; HR, hazard ratio; WC, waist circumference. * Adjusted for years between examinations, age, chronic diseases, sex (combined analyses). † Adjusted for smoking, Mediterranean diet score, energy intake, education, drinking pattern, sports activity. ‡ WC added to analyses of BMI; BMI added to analyses of WC. § Associations were not notably different in men and women.

Table 3. Hazard ratios (HR) and 95% confidence intervals (CI) of myocardial infarction according to changes in waist circumference (DWC). Abbreviations: CI, confidence interval; DBMI, changes in body mass index; DWC, changes in waist circumference; HR, hazard ratio. * Adjusted for years between examinations, age, chronic diseases, waist circumference in 1993-97, sex (combined analyses). † Adjusted for smoking, Mediterranean diet score, energy intake, education, drinking pattern, sports activity. ‡ Adjusted for changes in body mass index and body mass index in 1993-97. § Associations were not notably different in men and women. Associations were accepted to be linear. doi:10.1371/journal.pone.0026849.t003
v3-fos-license
2020-08-06T09:07:54.110Z
2019-09-16T00:00:00.000
219965891
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1088/1674-1137/abab90", "pdf_hash": "d17aa33cd6baea02f00e1b131320ddfd24c85e0a", "pdf_src": "MergedPDFExtraction", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:47", "s2fieldsofstudy": [ "Physics" ], "sha1": "bdd048230b79671c58a19676b746173794a57ee0", "year": 2020 }
pes2o/s2orc
Equation of state and chiral transition in soft-wall AdS/QCD with more realistic gravitational background

Zhen Fang1, ∗ and Yue-Liang Wu2, 3, † 1Department of Applied Physics, School of Physics and Electronics, Hunan University, Changsha 410082, China 2CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, Beijing 100190, China 3International Centre for Theoretical Physics Asia-Pacific (ICTP-AP), University of Chinese Academy of Sciences, Beijing 100049, China (Dated: June 3, 2021)

Abstract

We construct an improved soft-wall AdS/QCD model with a cubic coupling term of the dilaton and the bulk scalar field. The background fields in this model are solved from the Einstein-dilaton system with a nontrivial dilaton potential, which has been shown to be able to reproduce the equation of state from lattice QCD with two flavors. The chiral transition behaviors are investigated in the improved soft-wall AdS/QCD model with the solved gravitational background, and the crossover transition can be realized. Our study provides a possibility to address the deconfining and chiral phase transitions simultaneously in the bottom-up holographic framework.

I. INTRODUCTION

As is well known, quantum chromodynamics (QCD) describes the strong interaction of quarks and gluons. Due to asymptotic freedom, the method of perturbative quantum field theory can be used to study high-energy properties of QCD matter in the ultra-violet (UV) region. However, the strong coupling nature of QCD at low energies makes the perturbative method invalid for tackling the nonperturbative problems of QCD. Quark confinement and chiral symmetry breaking are two essential features of low-energy QCD, and the related physics has attracted a great deal of interest for many years. The QCD phase transition is one such field that has opened a window for us to look into the low-energy physics of QCD [1]. As the temperature increases, we know that QCD matter undergoes a crossover transition from the hadronic state to the state of quark-gluon plasma (QGP), along with the deconfining process of the partonic degrees of freedom and the restoration of chiral symmetry [2][3][4]. Many nonperturbative methods have been developed to study the QCD phase transition and the issues of low-energy hadron physics [5][6][7][8]. As a powerful method, lattice QCD is widely used to tackle low-energy QCD problems from first principles. However, there are limitations on this method, such as in the case of nonzero chemical potential, because of the sign problem. In recent decades, the anti-de Sitter/conformal field theory (AdS/CFT) correspondence has provided a powerful tool for us to study the low-energy physics of QCD through the holographic duality between a weakly coupled gravity theory in asymptotic AdS5 spacetime and a strongly coupled gauge field theory on the boundary [9][10][11]. A large amount of research has been done in this field, following either the top-down approach or the bottom-up approach. Holographic studies in the top-down approach have shown that the simplest nonsupersymmetric deformation of AdS/CFT with nontrivial dilaton profiles can reproduce the confining properties of QCD [63,64], and also realize the pattern of chiral symmetry breaking with quarks mimicked by the D7-brane probes [65][66][67][68][69]. However, it is not clear how to generate the crossover transition indicated by lattice QCD in the top-down framework.
Actually, AdS/CFT per se is inadequate to give a complete description of the thermodynamics of QCD because of its semi-classical character inherited from the type IIB string theory in the low-energy approximation and the large-N limit. The string-loop corrections have to be considered in order to give an adequate account of thermal QCD. Nevertheless, the qualitative description offered by such a holographic approach is still meaningful and indeed has provided many insights in our study of low-energy QCD. It has been shown that deconfinement in the pure gauge theory corresponds to a Hawking-Page phase transition between a thermal AdS space and a black hole configuration [70][71][72]. However, many works from the bottom-up approach tell us that we can use a bulk gravity system with a nontrivial dilaton profile to characterize the equation of state and the deconfining behaviors of QCD [73][74][75][76][77][78][79][80][81][82][83][84][85][86][87][88][89][90]. Moreover, unlike the holographic studies of pure gauge theory, the crossover transition in these bottom-up models is related only to the black hole solution solved from the Einstein-dilaton(-Maxwell) system, which is contrary to the usual claim that the black hole is dual to the deconfined phase at high temperatures. As we cannot expect to make use of two distinct bulk geometries to generate a smooth crossover transition in AdS/QCD [27], it seems more natural to give a description of thermal QCD properties only in terms of the black hole solution. However, in this case we must make sure that the black hole is stable when compared with the thermal gas phase.

In the bottom-up approach, the soft-wall AdS/QCD model provides a concise framework to address the issues of the chiral transition [18]. However, it has been shown that the original soft-wall model lacks spontaneous chiral symmetry breaking [18,28]. The chiral transitions in the two-flavor case have been studied in a modified soft-wall AdS/QCD model, where the second-order chiral phase transition in the chiral limit and the crossover transition with finite quark masses were first realized in the holographic framework [91,92]. In Ref. [93], we proposed an improved soft-wall model which can generate both the correct chiral transition behaviors and the light meson spectra in a consistent way. The generalizations to the 2+1 flavor case have been considered in Refs. [94][95][96], and the quark-mass phase diagram that is consistent with the standard scenario can be reproduced. The case of finite chemical potential has also been investigated [97][98][99], and the chiral phase diagram containing a critical end point can be obtained from the improved soft-wall AdS/QCD model with 2+1 flavors [98,99]. It should be noted that the AdS-Schwarzschild black-hole background has been used in most studies of the chiral transition at zero chemical potential. However, such an AdS black-hole solution is dual to a conformal gauge theory, which cannot generate the QCD equation of state without breaking the conformal invariance [73]. As just mentioned, one can resort to the Einstein-dilaton system with a nontrivial dilaton profile to resolve this issue. So we wonder whether the correct chiral transition behaviors can still be obtained from a soft-wall AdS/QCD model with a gravitational background solved from the Einstein-dilaton system.

In this work, we shall consider this issue and try to combine the description of the chiral transition with that of the equation of state, which signifies deconfinement, in a unified holographic framework. This paper is organized as follows. In Sec. II, we consider an Einstein-dilaton system with a nontrivial dilaton potential, which can produce an equation of state consistent with lattice results in the two-flavor case. The vacuum expectation value (VEV) of the Polyakov loop will also be computed in such a background system, and will be compared with the lattice data. In Sec. III, we propose an improved soft-wall AdS/QCD model with a cubic coupling term of the dilaton and the bulk scalar field, and the chiral transition behaviors will be considered in the two-flavor case. It will be seen that the crossover behaviors of the chiral transition can be realized in this model. The parameter dependence of the chiral transition will also be investigated. In Sec. IV, we give a brief summary of our work and conclude with some remarks.

II. QCD EQUATION OF STATE FROM HOLOGRAPHY

A. The Einstein-dilaton system

In previous works, we proposed an improved soft-wall AdS/QCD model with a running bulk scalar mass $m_5^2(z)$, which gives quite a good characterization of the chiral transition in both the two-flavor and the 2+1 flavor case [93,96]. However, the AdS-Schwarzschild black hole presumed in this model cannot describe the thermodynamical behaviors of the QCD equation of state and other equilibrium quantities, which show obvious violation of conformal invariance [73]. In order to acquire these basic features of thermal QCD, we need to construct a proper gravity background other than the AdS-type black hole to break the conformal invariance of the dual gauge theory. The minimal action of such a background system is given in the string frame by the action (1), with $\kappa_5^2 = 8\pi G_5$; a dilaton field $\phi$ has been introduced to produce relevant deformations of the dual conformal field theory. The dilaton $\phi(z)$ is assumed to depend only on the radial coordinate $z$. The key point of this model is to find a particular form of the dilaton potential $V(\phi)$ with the necessary ingredients to describe QCD thermodynamics, such as the equation of state. The metric of the bulk geometry in the string frame can be written in terms of a warp factor $A_S(z)$, with the asymptotic structure of AdS$_5$ spacetime at $z \to 0$ to guarantee the UV conformal behavior of the dual gauge theory on the boundary. We take the AdS radius $L = 1$ for convenience. To simplify the calculation, we will work in the Einstein frame with the metric ansatz (3). The warp factors in the two frames are related by $A_S = A_E + \frac{2}{3}\phi$, in terms of which the background action (4) in the Einstein frame can be obtained from the string-frame action (1), with $V_E(\phi) \equiv e^{4\phi/3} V(\phi)$ (the subscript $E$ denotes the Einstein frame).

B. The EOM with a nontrivial dilaton potential

The independent Einstein equations (5) and (6) can be derived by the variation of the action (4) with respect to the metric $g_{MN}$, and the equation of motion (EOM) (7) of the dilaton $\phi$ in the Einstein frame can also be derived. Given the dilaton potential $V_E(\phi)$, the background fields $A_E$, $f$ and $\phi$ can be solved numerically from the coupled differential equations (5), (6) and (7). Although there are few constraints on the form of the dilaton potential from the top-down approach of AdS/QCD, it has been shown that a proper $V_E(\phi)$ can be constructed from the bottom up to describe the equation of state of the strongly coupled QGP [73,74].
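The displayed equations (1)-(7) are not reproduced in the text above. For orientation, the standard Einstein-dilaton expressions implied by the quoted frame relations $A_S = A_E + \frac{2}{3}\phi$ and $V_E = e^{4\phi/3}V$ are collected below; they are written in a common convention and are meant as a reference sketch rather than a verbatim copy of the paper's equations, so normalizations and the precise form of the warp factor may differ.

```latex
% String-frame action (the analogue of Eq. (1)):
S_S = \frac{1}{2\kappa_5^2}\int d^5x\,\sqrt{-g_S}\,e^{-2\phi}
      \left[R_S + 4\,\partial_M\phi\,\partial^M\phi - V(\phi)\right],
\qquad \kappa_5^2 = 8\pi G_5 .

% A common Einstein-frame metric ansatz (the analogue of Eq. (3)):
ds_E^2 = e^{2A_E(z)}\left(-f(z)\,dt^2 + d\vec{x}^{\,2} + \frac{dz^2}{f(z)}\right),
\qquad e^{A_E(z)} \to \frac{L}{z}\quad (z\to 0).

% Einstein-frame action obtained with g^S_{MN} = e^{4\phi/3}\, g^E_{MN}
% (the analogue of Eq. (4)):
S_E = \frac{1}{2\kappa_5^2}\int d^5x\,\sqrt{-g_E}
      \left[R_E - \tfrac{4}{3}\,\partial_M\phi\,\partial^M\phi - V_E(\phi)\right],
\qquad V_E(\phi) = e^{4\phi/3}V(\phi).

% Dilaton equation of motion following from S_E (the analogue of Eq. (7)):
\frac{8}{3}\,\frac{1}{\sqrt{-g_E}}\,
\partial_M\!\left(\sqrt{-g_E}\,g_E^{MN}\partial_N\phi\right)
= \frac{\partial V_E(\phi)}{\partial\phi}.
```

The two independent Einstein equations for $A_E(z)$ and $f(z)$ follow in the usual way from varying $S_E$ with respect to the metric under this ansatz.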
In this work, we shall consider this issue and try to combine the description of chiral transition with that of the equation of state which signifies deconfinement in a unified holographic framework. This paper is organized as follows. In Sec. II, we consider an Einstein-dilaton system with a nontrivial dilaton potential, which can produce the equation of state consistent with lattice results in the two-flavor case. The vacuum expectation value (VEV) of the Polyakov loop will also be computed in such a background system, and will be compared with the lattice data. In Sec. III, we propose an improved soft-wall AdS/QCD model with a cubic coupling term of the dilaton and the bulk scalar field, and the chiral transition behaviors will be considered in the two-flavor case. It will be seen that the crossover behaviors of chiral transition can be realized in this model. The parameter dependence of chiral transition will also be investigated. In Sec. IV, we give a brief summary of our work and conclude with some remarks. II. QCD EQUATION OF STATE FROM HOLOGRAPHY A. The Einstein-dilaton system In the previous works, we proposed an improved soft-wall AdS/QCD model with a running bulk scalar mass m 2 5 (z), which gives quite a good characterization for the chiral transition in both the two-flavor and the 2 + 1 flavor case [93,96]. However, the AdS-Schwarzschild black hole presumed in this model cannot describe the thermodynamical behaviors of QCD equation of state and other equilibrium quantities which show obvious violation of conformal invariance [73]. In order to acquire these basic features of thermal QCD, we need to construct a proper gravity background other than the AdS-type black hole to break the conformal invariance of the dual gauge theory. The minimal action of such a background system is given in the string frame as where κ 2 5 = 8πG 5 , and a dilaton field φ has been introduced to produce relevant deformations of the dual conformal field theory. The dilaton φ(z) is assumed to depend only on the radial coordinate z. The key point of this model is to find a particular form of the dilaton potential V (φ) with necessary ingredients to describe the QCD thermodynamics, such as the equation of state. The metric of the bulk geometry in the string frame can be written as with the asymptotic structure of AdS 5 spacetime at z → 0 to guarantee the UV conformal behavior of the dual gauge theory on the boundary. We take the AdS radius L = 1 for convenience. To simplify the calculation, we will work in the Einstein frame with the metric ansatz The warp factors in the two frames are related by A S = A E + 2 3 φ, in terms of which the background action in the Einstein frame can be obtained from the string-frame action (1) as with V E (φ) ≡ e 4φ 3 V (φ) (the subscript E denotes the Einstein frame). B. The EOM with a nontrivial dilaton potential The independent Einstein equations can be derived by the variation of the action (4) with respect to the metric g M N , The equation of motion (EOM) of the dilaton φ in the Einstein frame can also be derived as Given the dilaton potential V E (φ), the numerical solution of the background fields A E , f and φ can be solved from the coupled differential equations (5), (6) and (7). Although there are few constraints on the form of the dilaton potential from the topdown approach of AdS/QCD, it has been shown that a proper V E (φ) can be constructed from bottom up to describe the equation of state of the strongly coupled QGP [73,74]. 
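The two frame-change relations quoted above, A_S = A_E + 2φ/3 and V_E(φ) = e^{4φ/3} V(φ), are straightforward to encode. The short sketch below does only that; the dilaton profile, warp factor, and string-frame potential used in the demonstration are placeholders (the actual background is solved later from the Einstein-dilaton equations), so the numbers carry no physical meaning.

```python
import numpy as np

def warp_string_frame(A_E, phi):
    """String-frame warp factor from the Einstein-frame one: A_S = A_E + 2*phi/3."""
    return A_E + 2.0 * phi / 3.0

def potential_einstein_frame(V_string, phi):
    """Einstein-frame potential: V_E(phi) = exp(4*phi/3) * V(phi)."""
    return np.exp(4.0 * phi / 3.0) * V_string

# Demonstration with placeholder profiles (NOT the solved background of this model).
z   = np.linspace(1e-3, 5.0, 500)
phi = 0.5 * z**2            # illustrative dilaton profile
A_E = -0.1 * z**2           # illustrative deviation from pure AdS (A_E -> 0 as z -> 0)
V_s = -12.0 + 0.3 * phi**2  # illustrative string-frame potential values

A_S = warp_string_frame(A_E, phi)
V_E = potential_einstein_frame(V_s, phi)
print(A_S[:3], V_E[:3])
```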
Near the boundary, the bulk geometry should approach the AdS 5 spacetime that corresponds to a UV fixed point of the dual gauge theory. This requires that the dilaton potential at UV has the following asymptotic form: with the rescaled dilaton field defined by φ c = 8 3 φ, in terms of which the action (4) can be recast into the canonical form As argued in Ref. [74], the dilaton potential at IR takes an exponential form V c (φ c ) ∼ V 0 e γφc with V 0 < 0 and γ > 0 in order to yield the Chamblin-Reall solution, whose adiabatic generalization links the QCD equation of state to the specific form of V c (φ c ). According to AdS/CFT, the mass squared of φ c is related to the scaling dimension ∆ of the dual operator on the boundary by m 2 L 2 = ∆(∆ − 4) [16]. We only consider the case of 2 < ∆ < 4, which corresponds to the relevant deformations satisfying the Breitenlohner-Freedman (BF) bound [73,77]. It is usually assumed that the dilaton field φ c is dual to the gauge-invariant dimension-4 glueball operator tr F 2 µν , yet other possibilities such as a dimension-2 gluon mass operator have also been considered [32]. Following Ref. [73], we try to match QCD at some intermediate semi-hard scale, where the scaling dimension of tr F 2 µν would have a smaller value than 4. One remark is that the asymptotic freedom cannot be captured in this way, but will be replaced by conformal invariance when above the semi-hard scale [73]. The full consideration might go beyond the supergravity approximation. In this work, we just take ∆ = 3, which has been shown to be able to describe the equation of state from lattice QCD with 2 + 1 flavors [78,79], and which is also easier to implement in the numerical calculation. We note that other values of ∆ can also be used to mimick the QCD equation of state, and this is by and large determined by the particular form of the dilaton potential and the specific parameter values [73,77]. The main aim of our work is to investigate the chiral properties based on a gravitational background which can reproduce the QCD equation of state. Thus we will not delve into the possible influence of the value of ∆ on the results considered in our work. Following the studies in Ref. [73], we choose a relatively simple dilaton potential which satisfies the required UV and IR asymptotics, where γ and b 2 are constrained by the UV asymptotic form (8) as The dilaton potential V E (φ) has the form We will see that the Einstein-dilaton system given above can also be used to mimick the two-flavor lattice results of the QCD equation of state, whereby the dilaton potential V E (φ) and the background geometry can be reconstructed for the two-flavor case. C. Equation of state Now we come to the equation of state in the Einstein-dilaton system with the given form of dilaton potential (10). First note that the background geometry has an event horizon at z = z h which is determined by f (z h ) = 0. In terms of the metric ansatz (3), the Hawking temperature T of the black hole is given by and the entropy density s is related to the area of the horizon, Thus we can compute the speed of sound c s by the formula Moreover, the pressure p can be obtained from the thermodynamic relation s = ∂p ∂T as The energy density ε = −p+sT and the trace anomaly ε−3p can also be computed. Then we can study the temperature dependence of the equation of state in such an Einstein-dilaton system. 
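Once the background has been solved for a range of horizon positions z_h, the equation of state follows from a table of (T, s) values. The sketch below assembles the remaining quantities from such a table: the relations s = ∂p/∂T and ε = −p + sT are stated in the text, whereas c_s² = d ln T / d ln s is the standard expression and is used here as an assumption, since the paper's explicit formula is not reproduced in this extraction. The toy input table is invented purely for illustration.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def equation_of_state(T, s):
    """Assemble p, eps, trace anomaly, and c_s^2 from tabulated T(z_h), s(z_h).

    Assumes the arrays are sorted by increasing T.  Uses
      p     from  s = dp/dT        (stated in the text)
      eps   = -p + s*T             (stated in the text)
      c_s^2 = d ln T / d ln s      (standard relation; an assumption here)
    """
    T = np.asarray(T, dtype=float)
    s = np.asarray(s, dtype=float)

    p = cumulative_trapezoid(s, T, initial=0.0)   # pressure, fixed to ~0 at the lowest tabulated T
    eps = -p + s * T                              # energy density
    trace_anomaly = eps - 3.0 * p                 # interaction measure eps - 3p
    cs2 = np.gradient(np.log(T), np.log(s))       # speed of sound squared

    return p, eps, trace_anomaly, cs2

# Toy, crossover-like entropy table (placeholder numbers, not the solved model).
T = np.linspace(0.10, 0.40, 31)                           # GeV
s = 12.0 * T**3 / (1.0 + np.exp((0.20 - T) / 0.02))       # GeV^3
p, eps, I, cs2 = equation_of_state(T, s)
print(3 * p[-1] / T[-1]**4, eps[-1] / T[-1]**4, cs2[-1])
```

Dividing p, ε, and ε − 3p by T⁴ puts the curves in the dimensionless form that is compared with the lattice data in Figs. 4 and 5.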
As we constrain ourselves to the two-flavor case, the equation of state from lattice QCD with two flavors will be used to construct the dilaton potential V E (φ) [100]. Instead of implementing the numerical procedure elucidated in Ref. [74], we will directly solve the background fields from Eqs. (5), (6) and (7). To simplify the computation, note that Eq. (5) can be integrated into a first-order differential equation where f c is the integral constant. In view of ∆ = 3, the UV asymptotic forms of the background fields at z → 0 can be obtained as with three independent parameters p 1 , p 3 and f c . As we have f (z h ) = 0, to guarantee the regular behavior of φ(z) near the horizon, Eq. (7) must satisfy a natural IR boundary condition at z = z h , With the UV asymptotic form (18) and the IR boundary condition (19), the background fields f , A E and φ can be solved numerically from Eqs. (6), (7) and (17). We find that the dilaton potential (10) with γ = 0.55, b 2 = 0.315 and b 4 = −0.125 can well reproduce the two-flavor lattice QCD results of the equation of state. Note that γ and b 2 are related by the formula (11). The parameter p 1 = 0.675 GeV is also fitted by the lattice results, and the 5D Newton constant is just taken as G 5 = 1 in our consideration. The parameters p 3 and f c in (18) are constrained by the IR boundary condition (19), and thus depend on the horizon z h or the temperature T . We show the z h -dependence of the parameter f c and the temperature T in Fig. 1. Both f c and T decrease monotonically towards zero with the increase of z h , which implies that our black hole solution persists in all the range of T . We also show the parameter p 3 as a function of T in Fig. 2, where we can see that p 3 varies very slowly in the range of T = 0 ∼ 0.2 GeV, then decreases and goes to negative values at around T 0.28 GeV. The temperature dependences of the entropy density s/T 3 and the speed of sound squared c 2 s are shown in Fig. 3, while in Fig. 4 we compare the numerical results of the pressure 3p/T 4 and the energy density ε/T 4 in unites of T 4 with the lattice interpolation results for the B-mass ensemble considered in Ref. [100]. In Fig. 5, we present the model result of the trace anomaly (ε − 3p)/T 4 , which is also compared with the lattice interpolation result. We can see that the Einstein-dilaton system with a nontrivial dilaton potential can generate the crossover behavior of the equation of state which matches well with the lattice results. D. Polyakov loop The deconfining phase transition in thermal QCD is characterized by the VEV of the Polyakov loop which is defined as where 0 is the time component of the non-Abelian gauge field operator, the symbol P denotes path ordering and the trace is over the fundamental representation of SU (N c ). The VEV of the Polyakov loop in AdS/CFT is schematically given by the world-sheet path integral where X is a set of world-sheet fields and S w is the classical world-sheet action [76,77]. In principle, L can be evaluated approximately in terms of the minimal surface of the string world-sheet with given boundary conditions. In the low-energy and large N c limit, we have L ∼ e −S N G with the Nambu-Goto action where α denotes the string tension, g S µν is the string-frame metric and X µ = X µ (τ, σ) is the embedding of the world-sheet in the bulk spacetime. The regularized minimal world-sheet area takes the form with g p = L 2 2α [76]. 
Subtracting the UV divergent terms and letting → 0, the renormalized world-sheet area can be obtained as where c p is a scheme-dependent normalization constant.Thus the VEV of the Polyakov loop can be written as where w is a weight coefficient and the normalization constant c p = ln w − c p . We plot the temperature-dependent behavior of L with the parameter values g p = 0.29 and c p = 0.16 in Fig. 6, where we also show the two-flavor lattice data of the renormalized Polyakov loop (corresponding to the B-mass ensemble in [100]). We can see that the model result fits the lattice data quite well when we choose proper values of g p and c p . E. On the stability of the black hole solution One remark should be given on the background solution of the Einstein-dilaton system. In the above description of the equation of state, we have only used the black hole solution which is asymptotic to AdS 5 near the boundary, and have seen that this is crucial for the realization of the crossover transition. However, in principle, the Einstein-dilaton system also admits a thermal gas solution, which can be obtained by setting f (z) = 1 [49,72]. To guarantee the soundness of our calculation, we shall check the stability of the black hole solution against the thermal gas one. According to AdS/CFT, the free energy is related to the on-shell action of the background fields by βF = S R with β = 1/T , and the regularized on-shell action consists of three parts: where S E denotes the on-shell Einstein-Hilbert action, S GH denotes the Gibbons-Hawking term and S count denotes the counter term. The subscripts and IR denote the contributions at UV cut-off z = and IR cut-off z = z IR respectively. Following Ref. [49], we can obtain the regularized on-shell action of the black hole (BH) solution: where M 3 ≡ 1/(16πG 5 ), V 3 is the three-space volume and b(z) ≡ L z e A E (z) . Note that S BH has no IR contribution due to f (z h ) = 0. The regularized on-shell action of the thermal gas (TG) solution takes the form: where b 0 (z) ≡ L z e A E0 (z) andβ,Ṽ 3 denote the corresponding quantities in the thermal gas case. To compare the free energies, we must make sure that the intrinsic geometries near the boundary should be the same for the two background solutions, i.e., the proper length of time circle and the proper volume of three-space should be the same at z = , which imposes the following conditions [49]: With the condition (29), the free energy difference between the two background solutions has the form: where the IR contribution in (28) has been omitted since this term vanishes for good singularities [49]. In terms of the UV asymptotic forms (18), we obtain the following result: where we have taken the limit → 0. Note that the UV divergent terms in S BH and S T G have the same form, thus cancel in the final result. Since f c > 0, we have F < 0, which implies that the black hole phase is more stable than the thermal gas phase. III. CHIRAL TRANSITION IN AN IMPROVED SOFT-WALL MODEL WITH SOLVED BACKGROUND Our previous studies have shown that the chiral transition at zero baryon chemical potential can be characterized by an improved soft-wall AdS/QCD model in the AdS-Schwarzschild black hole background [93,96]. However, this black hole solution cannot describe the QCD equation of state due to the conformal invariance of the dual gauge theory. 
Our main aim of this work is to combine the advantages of the improved soft-wall model in the description of chiral transition with a background system which can reproduce the deconfinement properties of QCD. As a first attempt, we investigate the possible ways to produce the chiral transition behaviors in the two-flavor case based on an improved soft-wall model (as the flavor part) under the more realistic background solved from the above Einstein-dilaton system. A. The flavor action We first outline the improved soft-wall AdS/QCD model with two flavors which is proposed in Ref. [93]. The bulk action relevant to the chiral transition in this model is the scalar sector, where the dilaton takes the form Φ(z) = µ 2 g z 2 to produce the linear Regge spectra of light mesons, and the scalar potential is with a running bulk mass m 2 5 (z) = −3 − µ 2 c z 2 . The constant term of m 2 5 (z) is determined by the mass-dimension relation m 2 5 L 2 = ∆(∆ − 4) for a bulk scalar field [15,16], while the z-squared term is motivated by the phenomenology of meson spectrum and the quark mass anomalous dimension [93]. In the holographic framework, a natural mechanism to produce such a z-dependent term of m 2 5 (z) is to introduce a coupling between the dilaton and the bulk scalar field. As we can see, without changing the results of the improved soft-wall model, the scalar potential can be recast into another form where m 2 5 = −3, and a cubic coupling term of Φ and X has been introduced. The effects of similar couplings on the low-energy hadron properties have also been considered in the previous studies [32]. Here we propose such a change of V X from (33) to (34) with the aim to describe the chiral transition behaviors for the two-flavor case. Thus the flavor action that will be addressed in this work is Unlike the previous studies, the metric and the dilaton in the flavor action (35) will be solved from the Einstein-dilaton system (6), (7) and (17), which has been shown to be able to reproduce the two-flavor lattice results of the equation of state. We shall assume that the flavor action (35) has been written in the string frame with the metric ansatz (2). In our model, the dilaton field φ in the background action (1) has been distinguished from the field Φ in the flavor action (35). From AdS/CFT, these two fields may be reasonably identified as the same one, as indicated by the Dirac-Born-Infeld action which dictates the dynamics of the open string sector with the string coupling g s = e φ . This is implemented in some works [32]. However, the low-energy and large-N limits taken in AdS/CFT and the further reduction to AdS/QCD has made things more subtle. The exact correspondence between g s and e φ is not consolidated in the bottom-up AdS/QCD. On the other hand, the dilaton term e −Φ in the flavor sector has been introduced to realize the Regge spectra of hadrons [18]. In this work, we concentrate on the phenomenological aspects and study how to realize more low-energy properties of QCD by the holographic approach. Thus we try a more general form Φ = kφ with k a parameter, which will not affect the linear Regge spectra qualitatively. In the actual calculation, we just choose two simplest cases k = 1 and k = 2 to investigate the effects of k on chiral behaviors. The probe approximation which neglects the backreaction effect of the flavor sector on the background system will be adopted in this work, as in the most studies on AdS/QCD with fixed background. B. 
The EOM of the scalar VEV According to AdS/CFT, the VEV of the bulk scalar field in the two-flavor case can be written as X = χ(z) 2 I 2 with I 2 denoting the 2 × 2 identity matrix, and the chiral condensate is incorporated in the UV expansion of the scalar VEV χ(z) [16]. To address the issue on chiral transition, we only need to consider the action of the scalar VEV, with In terms of the metric ansatz (2), the EOM of χ(z) can be derived from the action (36) as The UV asymptotic form of χ(z) at z → 0 can be obtained from Eq. (38) as where m q is the current quark mass, σ is the chiral condensate, and ζ = √ Nc 2π is a normalization constant [20]. As in Eq. (7), a natural boundary condition at horizon z h follows from the regular condition of χ(z) near z h , C. Chiral transition To study the chiral transition properties in the improved soft-wall AdS/QCD model with the given background, we need to solve the scalar VEV χ(z) numerically from Eq. (38) with the UV asymptotic form (39) and the IR boundary condition (40). The chiral condensate can then be extracted from the UV expansion of χ(z). In the calculation, we will take the set of parameter values which has been used to fit the lattice results of the equation of state in the two-flavor case (see Sec. II C), and the quark mass will be fixed as m q = 5 MeV. In this work, we only consider two cases corresponding to k = 1 (Φ = φ) and k = 2 (Φ = 2φ). In each case, the temperature dependence of the chiral condensate normalized by σ 0 = σ(T = 0) will be investigated for a set of values of λ 1 and λ 2 . We first fix λ 2 = 1, and select four different values of λ 1 for each case. The model results of the normalized chiral condensate σ/σ 0 as a function of T are shown in Fig. 7. We can see that the crossover transition can be realized qualitatively in such an improved soft-wall model with the solved gravitational background. Moreover, we find that there is a decreasing tendency for the transition temperature with the decrease of λ 1 , yet a visible bump emerges near the transition region at relatively smaller λ 1 and only disappears gradually with the increase of λ 1 . As shown in Fig. 7, we find that the transition temperatures with our selected parameter values are larger than the lattice result T χ ∼ 193 MeV [100]. We then investigate the effects of the quartic coupling constant λ 2 on chiral transitions. We fix λ 1 = −1.4 for the case k = 1 and λ 1 = −0.5 for the case k = 2, and select four different values of λ 2 in each case. The chiral transition curves are plotted in Fig. 8. The result shows that with the increase of λ 2 the bump near the transition region becomes smaller and the normalized chiral condensate σ/σ 0 descends gently with T , though the value of λ 2 needs to be very large in order to smooth out the bump. IV. CONCLUSION AND DISCUSSION We considered an improved soft-wall AdS/QCD model with a cubic coupling term between the dilaton and the bulk scalar field in a more realistic background, which is solved from the Einstein-dilaton system with a nontrivial dilaton potential. Such an Einsteindilaton system has been used to reproduce the equation of state from lattice QCD with two flavors. Then the chiral transition behaviors were investigated in the improved soft-wall model based on the solved bulk background. We only considered two typical cases with k = 1 and k = 2, and the quartic coupling constant is firstly fixed as λ 2 = 1. In both cases, the crossover behavior of chiral transition can be realized, as seen from Fig. 7. 
Nevertheless, the chiral transition temperatures obtained from the model are much larger than the lattice result. Although T χ decreases with the decrease of λ 1 , a visible bump near the transition region will emerge when λ 1 is small enough. We then studied the influence of the value of λ 2 on chiral transition, as shown in Fig. 8. We find that in some sense the quartic coupling term can smooth the bump, but to remove it, the value of λ 2 needs to be very large. In our consideration, the scaling dimension of the dual operator tr F 2 µν of the dilaton has been taken as ∆ = 3, which can be used to mimick the QCD equation of state [78,79]. However, we remark that the properties of thermal QCD considered in our work are not determined exclusively by one particular value of ∆. Indeed, other values of ∆ have also been adopted for the realization of equation of state with slightly different forms of V E (φ) [73,77]. Since the UV matching to QCD at a finite scale cannot capture asymptotic freedom, we are content to give a phenomenological description on the thermodynamic properties of QCD, which are expected not so sensitive to the UV regime from the angle of renormalization and effective field theory. In this work, we have built an improved soft-wall AdS/QCD model under a more realistic gravitational background, which provides a possibility in the holographic framework to address the deconfining and chiral phase transition simultaneously. We have assumed that the backreaction of the flavor sector to the background is small enough such that we can just adopt the solution of the Einstein-dilaton system as the bulk background under which the chiral properties of the improved soft-wall model are considered. This is sensible only when we take a small weight of the flavor action (35) compared to the background action (1). To clarify the phase structure in this improved soft-wall AdS/QCD model, we shall consider the backreaction of the flavor part to the background system thoroughly. The correlation between the deconfining and chiral phase transitions can then be studied in such an improved soft-wall model coupled with an Einsteindilaton system. The case of finite chemical potential can also be considered by introducing a U (1) gauge field in order to study the properties of QCD phase diagram.
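As a small illustration of the numerical step described in Sec. III B–C above — reading off the chiral condensate σ once χ(z) has been solved with the horizon regularity condition — the sketch below fits a solution near the boundary. The leading UV behaviour χ(z) ≃ m_q ζ z + (σ/ζ) z³ with ζ = √N_c/(2π) is assumed here, since Eq. (39) itself is not reproduced in the extracted text, and the sample data are generated from that very expansion rather than from the solver (Eq. (38) is likewise not reproduced).

```python
import numpy as np

N_C  = 3
ZETA = np.sqrt(N_C) / (2.0 * np.pi)   # normalization constant quoted in the text
M_Q  = 0.005                          # current quark mass, 5 MeV (in GeV)

def extract_condensate(z, chi, zeta=ZETA):
    """Least-squares fit of chi(z) = m_q*zeta*z + (sigma/zeta)*z**3 on a small-z window."""
    A = np.column_stack([zeta * z, z**3 / zeta])
    (m_q_fit, sigma_fit), *_ = np.linalg.lstsq(A, chi, rcond=None)
    return m_q_fit, sigma_fit

# Fake near-boundary data generated from the assumed UV expansion (demonstration only).
sigma_true = 0.015                                   # GeV^3, placeholder value
z   = np.linspace(1e-4, 5e-3, 50)
chi = M_Q * ZETA * z + (sigma_true / ZETA) * z**3

m_q_fit, sigma_fit = extract_condensate(z, chi)
print(m_q_fit, sigma_fit)    # recovers ~0.005 and ~0.015
```

Repeating this extraction for a set of horizon radii z_h, i.e. temperatures, yields σ(T)/σ₀ curves of the kind shown in Figs. 7 and 8.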
Stereotactic Radiotherapy for Pancreatic Cancer: A Single-Institution Experience Introduction Despite treatment advances, the prognosis of locally advanced pancreatic cancer is poor. Treatment remains varied and includes systemic and radiotherapy (RT). Stereotactic body radiotherapy (SBRT), highly conformal high-dose RT per fraction, is an emerging treatment option. Materials and methods We performed a single-institution retrospective review of patients with pancreatic adenocarcinoma treated with SBRT from 2015-2017. The median dose was 27 Gy (range: 21-36 Gy) in three fractions. Endpoints included local progression (RECIST 1.1; Response Evaluation Criteria in Solid Tumors 1.1), distant metastasis, overall survival, and toxicity. Results Forty-one patients were treated, with a median follow-up of eight months. Patients who received SBRT had unresectable (49%), metastatic (17%), or borderline resectable (7%) disease, declined surgery (17%), medically inoperable (7%), or developed local recurrence following the Whipple procedure (2%). The six-month and one-year rates of local progression-free survival, distant metastasis-free survival, and overall survival were 62% and 55%, 44% and 32%, and 70% and 49%, respectively. Five patients (12%) experienced seven late gastrointestinal (GI) grade 3 events. Conclusion SBRT may be considered a treatment option to achieve local control of pancreatic cancer and is associated with a modest risk of severe late GI toxicities. Systemic therapies remain important, given the proportion of patients who develop distant metastases. Introduction Pancreatic cancer is the third leading cause of cancer-related death in the United States, with a five-year overall survival of approximately 8% [1][2]. While surgery is the only potentially curative option, many patients are not eligible either because the disease is locally advanced or the patient is medically inoperable. In these cases, the prognosis of clinically localized or locally advanced pancreas cancer (LAPC) remains poor. Treatment for unresectable LAPC remains controversial and includes both systemic therapy and radiotherapy [3]. The LAP07 trial reported an improvement in local control for the chemoradiotherapy (CRT) arm as compared to the chemotherapy alone arm; however, there was no difference in overall survival [4]. Although continuous improvements in systemic therapy is allowing for prolonged survival, local failure remains a clinical problem leading to morbidity and death [5]. Given improvements in overall survival, the need for improved local control is becoming increasingly important. The role of stereotactic body radiotherapy (SBRT) is being investigated as an alternative to long-course chemoradiation in select patients. Advantages of SBRT include improved patient tolerability, shorter treatment time, and, therefore, less time off systemic therapy, as well as safely delivered highly conformal dose-escalation designed to maximize local control [6][7][8][9][10][11][12][13][14][15]. The purpose of this study is to report our early institutional real-world outcomes of local control, distant metastasis, overall survival, and toxicity in patients receiving pancreatic SBRT. Patient population This was a retrospective analysis of consecutive patients treated with pancreatic SBRT at the Odette Cancer Centre, Sunnybrook Health Sciences Centre, Canada, from May 2015 to December 2017. Institutional research ethics board approval was obtained from Sunnybrook Health Sciences Centre (171-2018). 
Patients were required to have either biopsy-proven pancreatic adenocarcinoma, evidence of a pancreatic mass on imaging associated with significant fludeoxyglucose (FDG) avidity on positron emission tomographycomputed tomography (PET-CT) or a pancreatic mass with CT imaging characteristics consistent with adenocarcinoma and a markedly elevated CA-19-9 (>500). All patients considered for SBRT were evaluated by a hepatobiliary surgeon and deemed unresectable or medically inoperable. All relevant diagnostic scans were centrally reviewed by a radiologist to determine the extent of the disease. Patients with tumors 6 cm or smaller, with Karnofsky Performance status ≥60, were eligible for SBRT. Patients with metastatic disease were eligible for SBRT in settings where local control was deemed important and was reviewed on a case-by-case basis with multi-disciplinary input. The proximity of the tumor to gastrointestinal organs did not preclude treatment with SBRT but did impact the prescription dose, as described below. Patients could not receive chemotherapy within two weeks before or after SBRT. The laboratory values required to proceed with SBRT included neutrophils ≥1,500 cells/mm 3 , platelets ≥80,000 cells/mm 3 , and total bilirubin, aspartate aminotransferase (AST), alanine transaminase (ALT), and alkaline phosphatase each less than three times the institutional limit. Patients were assessed in follow-up on a three-monthly basis, with history, physical examination, bloodwork, and CT imaging, or sooner as clinically indicated. SBRT planning details Patients were required to have a biliary stent in situ or fiducial markers implanted prior to simulation to facilitate image guidance. Stents were the primary form of surrogate initially, as we did not have institutional access to fiducial markers. The stents were placed directly adjacent to the tumors. A consistent gastric filling protocol was required, and all patients were given oral and intravenous (IV) contrast prior to simulation. Patients were immobilized using an abdominal compression plate to minimize motion due to respiration and a contrast-enhanced four-dimensional computed tomography (4DCT) scan was acquired. Target volumes were contoured on the average, maximal inhale, and maximal exhale 4DCT data sets. The gross tumor volume (GTV) was the macroscopic tumor as defined on imaging and endoscopy. No additional clinical target volume (CTV) was defined. An internal target volume (ITV) was created by combining all GTV contours from each of the datasets. A uniform 0.5 cm expansion around the ITV was added to create the planning target volume (PTV), as per the institutional abdominal SBRT protocol ( Figure 1). Mandatory contoured organs at risk (OARs) included the uninvolved pancreas, stomach, duodenum, small bowel, liver, kidneys, spinal canal, heart, and skin. Treatment plans were generated in the Pinnacle treatment planning system (Philips Medical Systems, Madison, WI) based on the average 4DCT image sequence, using volumetric modulated arc therapy (VMAT) with one full arc and additional partial arcs if necessary to achieve adequate target coverage and dose constraints ( Figure 2). Dose-volume constraints to organs at risk are detailed in Table 1. Dose/fractionation ranged from 21-36 Gy in three fractions, with the aim to treat to as high a dose as possible within this range as determined by dose constraints to OARs (Table 1), with treatment delivered every other day over one week. PTV coverage was V95% ≥99% and V110% <1%. 
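Several of the plan-quality numbers used in this protocol — the V95%/V110% PTV coverage criteria above, the absolute-volume duodenal metrics discussed later (V30Gy, V16.5Gy), and the BED10 values quoted in the Discussion — reduce to one-line computations. The sketch below illustrates them on synthetic voxel doses; the study itself used the Pinnacle treatment planning system, so the helper names and all numbers here are assumptions for demonstration only.

```python
import numpy as np

def bed(total_dose_gy, n_fractions, alpha_beta=10.0):
    """Biologically effective dose: BED = n*d*(1 + d/(alpha/beta))."""
    d = total_dose_gy / n_fractions
    return n_fractions * d * (1.0 + d / alpha_beta)

def v_percent(dose_gy, prescription_gy, pct):
    """Percentage of voxels receiving at least pct% of the prescription dose."""
    return 100.0 * np.mean(dose_gy >= prescription_gy * pct / 100.0)

def v_abs_cc(dose_gy, voxel_cc, threshold_gy):
    """Absolute volume (cc) receiving at least threshold_gy."""
    return voxel_cc * np.count_nonzero(dose_gy >= threshold_gy)

# Synthetic voxel doses for illustration (not real plan data).
rng = np.random.default_rng(0)
prescription = 27.0                                     # median dose in this study, 3 fractions
ptv_dose = rng.normal(1.02 * prescription, 0.6, 20000)  # toy PTV voxel doses [Gy]
duo_dose = rng.normal(12.0, 5.0, 8000)                  # toy duodenum voxel doses [Gy]
voxel_cc = 0.003                                        # toy voxel volume [cc]

print("BED10 of 27 Gy in 3 fx :", bed(27, 3))           # ~51.3 Gy
print("PTV V95%  (goal >= 99%):", v_percent(ptv_dose, prescription, 95))
print("PTV V110% (goal <  1%) :", v_percent(ptv_dose, prescription, 110))
print("Duodenum V30Gy   [cc]  :", v_abs_cc(duo_dose, voxel_cc, 30.0))
print("Duodenum V16.5Gy [cc]  :", v_abs_cc(duo_dose, voxel_cc, 16.5))
```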
Co-registration of cone-beam CT (CBCT) was performed at each treatment with initial alignment based on a soft tissue match, with fine-tuning based on either fiducial markers or stent as a surrogate [16][17]. SBRT details are outlined in Table 2. Outcome criteria Local response to SBRT was assessed using RECIST (Response Evaluation Criteria in Solid Tumors) version 1.1 criteria [18]. Progressive disease (PD) was defined, as at least a 20% increase in the sum of diameters of target lesions, taking as reference the smallest sum on study, with the sum also demonstrating an absolute increase of at least 5 mm. Toxicity was graded according to the National Cancer Institute Common Terminology Criteria for Adverse Events (NCI CTCAE) version 4.0 scale [19]. Response and toxicity were recorded prospectively in patient records during routine follow-up visits and were collected retrospectively for this study. Statistical analysis Kaplan-Meier curves were generated to calculate local progression-free survival, metastasis-free survival, and overall survival, calculated from the start date of SBRT. Patients who were still alive or who had not experienced the event of interest by October 1, 2018, were censored. Statistical routines were performed using SPSS (SPSS Statistics, Version 21.0, Armonk, NY: IBM Corporation). Results Forty-one patients were treated at our center with pancreatic SBRT ( Table 3). Median follow-up from the start date of SBRT was eight months (range: 0-21 months) for the entire cohort and 12 months (range: three to 21 months) for patients who are alive. Local control and survival outcomes At the time of the last follow-up, 13 patients (31.7%) developed local progression, at a median time of three months (range: 1-12 months) following SBRT. Of these 13 patients, nine developed metastases (at the same time as local progression in eight of these patients and subsequent to local progression in one patient), whereas two had metastatic disease at initial presentation. In total, 23 patients (56%) had metastatic disease at last follow-up, including seven patients who had synchronous metastases at diagnosis. Twenty-six patients (63%) had died at the time of the last follow-up. Toxicities Acute toxicities occurred in 14 patients (34.1%) of which nine experienced grade one abdominal pain, four experienced grade one fatigue, and one both. Late toxicities were observed in five patients (12.2%) with a total of seven grade three gastrointestinal (GI) events, which may be treatment-related. These events include duodenal stenosis (two events), a fistula between the cancer and the duodenum (two events), portal vein stenosis (one event), gastric outlet obstruction (one event), and duodenal ulcer resulting in hemorrhage (one event). No patients experienced grade four-five late GI toxicity. Given the difficulty in ascertaining whether an event in the location of the pancreas is due to tumor progression or treatment-related toxicity, if we exclude patients with local progression from measures of late toxicity, two patients (4.9%) experienced a total of three events, including duodenal ulcer resulting in hemorrhage (one event), mass-duodenal fistula (one event), and gastric outlet obstruction (one event). Surgical details One patient underwent a Whipple procedure 31 months prior to SBRT, with pathology significant for an approximate 4 cm margin-negative grade two pancreatic adenocarcinoma pT3N0, with 12 lymph nodes dissected. This patient also completed six months of adjuvant gemcitabine. 
The patient subsequently developed locally recurrent disease and underwent SBRT. In addition, two patients who underwent SBRT for borderline resectable disease were able to proceed with the Whipple procedure. The first patient had surgery performed one and a half months following the completion of SBRT, with pathology revealing a 2.8 cm margin-negative grade two ductal adenocarcinoma, pT3N0, with 22 lymph nodes removed. The second patient underwent both the Whipple procedure and NanoKnife surgery, with pathologic assessment of the specimen revealing no evidence of residual disease. Discussion The treatment of unresectable LAPC remains controversial. SBRT is increasingly used in these patients; however, its exact role and the optimal sequencing in the context of other treatments are still unknown. Until the results of the currently ongoing phase III Pancreatic Cancer Radiotherapy Study Group Trial evaluating patients with LAPC treated with modified FOLFIRNOX with or without SBRT (ClinicalTrials.gov NCT01926197) are published, treatment will be based on clinical experience derived from single institutional series. In this report, we highlight the outcomes of real-world patients treated with SBRT at our institution, which can add to the available body of evidence. Local control reported in this study was 62% at six months and 55% at one year. Compared to other series, this is a relatively low rate of local control. In a systematic review of 13 studies and 889 patients treated with pancreatic SBRT for patients with locally advanced disease, the one-year local control rate was 72% [15]. This review found that the total dose delivered and a higher number of fractions were significantly associated with one-year locoregional control [15]. Our study employed a relatively low total dose (median 27 Gy) with [21]. The role of dose-escalation in LAPC has been studied in the context of fractionated intensity-modulated RT (IMRT) treatment [22]. Of 200 patients with LAPC treated with induction chemotherapy followed by chemoradiation (50.4 Gy in 28 fractions; BED10 of 59.5 Gy), a subset of 47 patients was eligible for doseescalation with BED10 >70 Gy [22]. Patients who received dose escalation had significantly higher overall survival (17.8 versus 15 months) and improved local-regional recurrence-free survival (10.2 versus 6.2 months) [22]. We initially employed a three fraction treatment regimen based on the available evidence at the time we began the pancreas SBRT program, and to minimize treatment visits for patients, given the palliative nature of treatment [8,23]. However, based on our local control results, and the results of other series above, our revised institutional policy is to escalate the total radiation dose as allowable based on dose-volume constraints to organs at risk and to consider a five fraction treatment regimen (to a dose as high as allowable between 30 and 50 Gy; BED10 of 48-100 Gy) as a means to try to improve local control and reduce late GI toxicity. Late grade three GI toxicity was observed in 12% of patients, which is similar to two other published studies [24][25]. One previous analysis has shown that the radiotherapy dose to the duodenum was significantly related to toxicity [26]. In our study, we used two dose-volume constraints each for the duodenum and stomach ( Table 1). In one patient (2%), we exceeded the institutional duodenal V30Gy <0.5 cc dose constraint and the patient received V30Gy 0.52 cc. 
However, in the remainder of patients, we were able to meet this dose constraint by reducing the total prescription dose. There was more difficulty in achieving the V16.5Gy <5 cc dose constraint, as recommended by the American Association of Physicists in Medicine (AAPM) Task Group 101 and consensus guidelines, with a median V16.5Gy of 9.08 cc [27][28]. This constraint was exceeded in 34 patients (83%) based on the very close proximity of the pancreatic lesion to the duodenum. All cases were presented and reviewed at quality assurance rounds prior to treatment delivery whereby a discussion regarding the risks of decreasing the total prescription dose to achieve this dose constraint versus the risk of decreasing local control with a lower total prescription dose occurred. Ultimately, further work regarding optimal dose-volume constraint metrics and their relationship with late toxicity outcomes is necessary. In addition, further follow-up in our study cohort is needed to ascertain the true rates of late toxicity. This study has some limitations. First, the patient sample is small, and there is significant heterogeneity in the patients included in terms of diagnostic stage, reasons for receiving SBRT, the dose of SBRT received, and the use of chemotherapy. However, the heterogeneity is reflective of patients in real-world settings who received SBRT at a large, tertiary cancer center, and the study included all patients treated with SBRT at our institution from May 2015 to December 2017. In addition, the heterogeneity is reflective of current guidelines with no clear consensus on treatment for this group of patients [3]. Further research is needed to guide patient selection for various treatments in this population. Second, the median follow-up time is short and does not allow for an analysis of fully mature results, including fully mature late toxicity results. Third, the use of the RECIST v1.1 definition for local progression is not a perfect endpoint. Throughout other published studies reporting on pancreas SBRT, there are a variety of endpoints used for local progression assessment, with a lack of consensus. In particular, the RECIST definition can be problematic given the potential increase in tumor size in the approximately three-month period following pancreas SBRT secondary to tumoral edema. This may have led to an incorrect classification of local progression in patients who did not truly experience local progression (i.e. false positive). Conclusions Our results suggest that SBRT may be considered a treatment option in patients where local control of their pancreas tumor is important as part of their comprehensive multi-modality management. Based on our results, the use of a multi-fraction regimen and high total dose as allowable based on dose-volume constraints to organs at risk should be considered to minimize toxicity and improve local control outcomes. Further research is needed to improve outcomes for these patients. Additional Information Disclosures Human subjects: Consent was obtained by all participants in this study. Sunnybrook Health Sciences Centre issued approval 171-2018. Institutional research ethics board approval was obtained from Sunnybrook Health Sciences Centre (171-2018). Informed consent from individual patients included in this review was waived. Animal subjects: All authors have confirmed that this study did not involve animal subjects or tissue. 
Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Microbiological risk factors, ICU survival, and 1-year survival in hematological patients with pneumonia requiring invasive mechanical ventilation Purpose To identify pathogenic microorganisms and microbiological risk factors causing high morbidity and mortality in immunocompromised patients requiring invasive mechanical ventilation due to pneumonia. Methods A retrospective single-center study was performed at the intensive care unit (ICU) of the Department of Internal Medicine at Heidelberg University Hospital (Germany) including 246 consecutive patients with hematological malignancies requiring invasive mechanical ventilation due to pneumonia from 08/2004 to 07/2016. Microbiological and radiological data were collected and statistically analyzed for risk factors for ICU and 1-year mortality. Results ICU and 1-year mortality were 63.0% (155/246) and 81.0% (196/242), respectively. Pneumonia causing pathogens were identified in 143 (58.1%) patients, multimicrobial infections were present in 51 (20.7%) patients. Fungal, bacterial and viral pathogens were detected in 89 (36.2%), 55 (22.4%) and 41 (16.7%) patients, respectively. Human herpesviruses were concomitantly reactivated in 85 (34.6%) patients. As significant microbiological risk factors for ICU mortality probable invasive Aspergillus disease with positive serum-Galactomannan (odds ratio 3.1 (1.2-8.0), p = 0.021,) and pulmonary Cytomegalovirus reactivation at intubation (odds ratio 5.3 (1.1–26.8), p = 0.043,) were identified. 1-year mortality was not significantly associated with type of infection. Of interest, 19 patients had infections with various respiratory viruses and Aspergillus spp. superinfections and experienced high ICU and 1-year mortality of 78.9% (15/19) and 89.5% (17/19), respectively. Conclusions Patients with hematological malignancies requiring invasive mechanical ventilation due to pneumonia showed high ICU and 1-year mortality. Pulmonary Aspergillosis and pulmonary reactivation of Cytomegalovirus at intubation were significantly associated with negative outcome. Introduction Hematological malignancies (HM) and cancer therapies cause diverse immune defects.Thus, these patients are at increased and multi-etiological risk of severe infectious complications.Up to 80% of patients with leukemia, lymphoma and multiple myeloma experience infectious complications during disease and treatment [1].Development of pneumonia with acute respiratory failure (ARF) and need for invasive mechanical ventilation (MV) is associated with high mortality up to 70% [2][3][4]. Different hematological risk factors for mortality are known.These include hematopoietic stem cell transplantation (HSCT), graft versus host disease (gvhd) and neutropenia [5][6][7].Most lower respiratory tract (LRT) infections in patients with HM are due to bacteria, often with multidrug resistance [8].Besides community-acquired pneumonia, commonly caused by gram-positive bacteria, nosocomial infections with gram-negative germs play a crucial role in hematological patients [2]. 
Viral respiratory infections usually occur communityacquired and seasonally [9].Severity ranges from common cold to severe LRT infection.In recent years the COVID-19 pandemic impressively showed the potential threat of viral pneumonia for both cancer and non-cancer patients.In patients with HM about 30% of infections with communityacquired respiratory viruses (CRV) progress to LRT infections with an associated mortality of up to 25% [8].In case of fungal or bacterial co-infections mortality associated with viral pneumonia can be substantially higher [9]. Facultative pathogenic fungi are omnipresent, but invasive fungal infections are predominantly seen in immunocompromised patients [10].Invasive fungal disease can be found in about 30% of patients with HM post-mortem [11].Mostly, first site of invasive fungal infection is the lung because of the aerogenic transmission of the spores.The most common fungal pathogen detectable in LRT fluids is Aspergillus spp. with associated mortality rates ranging from 55 to 78% [12,13]. However, in 45-50% of hematological patients with clinical diagnosis of pneumonia, no pathogenic germs are found even though invasive diagnostic testing (e.g., broncho-alveolar lavage (BAL)) is performed [12,20].In these scenarios, radiological diagnostics, particularly computed tomography (CT) scans, are essential for verifying the site of infection and can be helpful in revealing characteristic findings indicative of specific pathogens.Even in neutropenic patients, who are usually the most challenging to diagnose due to subtle expression of clinical symptoms and radiological signs, pneumonia-specific infiltrates or even findings indicating specific pathogens have been found with a high sensitivity of up to 87% [21].Thus, CT scans constitute a valuable complementary method to microbiological testing in identifying causative agents of pneumonia in immunocompromised patients [22]. This study was performed to help inform on prevalence of pathogens and microbiological risk factors for intensive care unit (ICU) and 1-year mortality in patients with HM and pneumonia requiring invasive MV to better predict outcome and help improve treatment strategies. Study design and population This study is a retrospective single-center study.Data were collected from the ICU ward of the department of internal medicine and infectious diseases at Heidelberg University Hospital (Germany).The ICU consists of 14 treatment units equipped for intensive care including invasive MV.Annually more than 2000 patients are treated at this ICU, amongst others patients with HM and pneumonia when requiring invasive MV.Throughout the study period, hematological patients haven been initially treated at the department of hematology, including an intermediate care (IMC) ward with 16 treatment units not equipped for invasive MV.If developing ARF, hematological patients have been transferred to the ICU of the department of internal medicine and infectious diseases for more invasive treatment. 
We retrospectively enrolled all consecutive adult (≥ 18 years) patients with HM requiring invasive MV due to pneumonia over a period of 12 years (08/2004-07/2016).Information that could identify individual patients were made anonymous after data collection.Because of the anonymous and retrospective analysis the need for participant consent was waved by the ethics committee.`Pneumonia´ was defined as a clinical diagnosis in combination with pneumonia-suspicious findings on CT scan.Furthermore, we aimed for microbiological proof of pathogens in LRT fluids (endotracheal aspirate, bronchial fluids, or BAL fluids).Patients discharged alive within 24 h after admission were excluded from the analysis because their admission usually occurred only for invasive diagnostics or procedures (e.g.bronchoscopy).Re-admission to the ICU within 10 days after discharge occurred in four cases and was not considered a new case.Later re-admission occurred in three further patients (all after more than one year); these were thus considered new cases. In case of ARF due to pneumonia with need for intubation and invasive MV standard of care included amongst others invasive microbiological diagnostics (bronchial or BAL fluid, blood cultures), blood parameter analyses and chest CT scans.LRT fluids were tested for: Aspergillus spp., Pneumocystis jirovecii (P.jirovecii) and bacterial species detected in microbiological culture as well as atypical bacteria detected by polymerase chain reaction (PCR).Seasonal testing for CRV was performed for Influenza virus, Respiratory syncytial virus (RSV), Parainfluenza virus and Metapneumovirus and positive detection of these was considered as viral pneumonia.Additionally, we tested for reactivation of HHV (Herpes simplex virus 1 (HSV-1), Epstein-Barr-virus (EBV), Cytomegalovirus (CMV)).Viral load was assessed by quantitative polymerase chain reaction (PCR), performed on lower respiratory tract specimens (bronchoalveolar lavage or endotracheal aspirate), and results given in copies/ml.PCR was considered as positive with a threshold of > 1.000 copies/ml.With regard to longitudinal monitoring of lung viral load in patients with persistent need for invasive respiratory support, lower respiratory tract fluids were re-tested for viral load at regular intervals at the treating physician's discretion.Treatment failure for these patients was defined as lack of clinical improvement in combination with failure to achieve at least a significant reduction (> 1 log) in viral load.In individual cases additional testing for serological parameters such as Aspergillus antigen (Galactomannan) in serum was performed.Invasive pulmonary fungal disease was considered as positive if it met the European Organization for Research and Treatment of Cancer (EORTC) revised criteria of 2008 for a probable or proven invasive fungal disease [10].According to these criteria, the following applied to all of our patients with probable invasive fungal disease: • all our patients had a host factor due to their underlying disease and therapy.• the clinical criterion was fulfilled through mold-suspicious CT scans.• the mycological criterion was fulfilled through the cultural detection of Aspergillus in LRT fluids or the detection of Galactomannan in LRT fluids and serum. In four cases, cultural detection from sterile samples was additionally provided, thus fulfilling the criteria for proven invasive fungal disease. If more than one pathogenic germ was found, we categorized in primary or secondary/super-infection. 
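The quantitative PCR rules defined above (positivity at more than 1,000 copies/ml, and treatment failure when the viral load falls by less than 1 log without clinical improvement) can be captured in a small helper; the sketch below is illustrative only, with invented example numbers. The classification of cases with more than one pathogen is then spelled out in the next paragraph.

```python
import math

POSITIVITY_COPIES_PER_ML = 1_000   # quantitative PCR positivity threshold from the text
REQUIRED_LOG_DROP = 1.0            # at least a 1-log fall counted as a significant reduction

def pcr_positive(copies_per_ml):
    return copies_per_ml > POSITIVITY_COPIES_PER_ML

def log_reduction(baseline_copies, follow_up_copies):
    """log10 fall in viral load between two lower-respiratory-tract samples."""
    return math.log10(baseline_copies) - math.log10(follow_up_copies)

def treatment_failure(baseline_copies, follow_up_copies, clinically_improved):
    """Failure = no clinical improvement AND less than a 1-log fall in viral load."""
    return (not clinically_improved) and \
           log_reduction(baseline_copies, follow_up_copies) < REQUIRED_LOG_DROP

# Example: CMV load falling from 250,000 to 60,000 copies/ml without clinical improvement.
print(pcr_positive(250_000))                        # True
print(round(log_reduction(250_000, 60_000), 2))     # ~0.62 log
print(treatment_failure(250_000, 60_000, False))    # True -> flagged as treatment failure
```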
CRV were consistently seen as causative for primary infection.If no virus was detectable, the germ detected at the earliest timepoint was considered as causative for primary infection.If detected in LRT fluids, the following germs were considered non-pathogenic commensals (colonization) in accordance with resent guidelines: yeasts, non-hemolytic streptococci, viridans streptococci, coagulase-negative staphylococci and Staphylococcus hemolyticus [22].If detected, HHV were not rated as pneumonia causing pathogen but as reactivated (except of cases with CT-findings highly suspicious for CMV and HSV-1 pneumonia in combination with very high counts of virus DNA (> 1.000.000copies/ml) in respiratory tract specimens). Pneumonia-suspicious CT findings taken up to three days after ICU admission were assessed for diagnosis. Patients with a total leucocyte count < 1000/µl were categorized as neutropenic. Statistical analysis We analyzed epidemiology of germs and microbiological parameters.At time of discharge from ICU and 1 year after admission we built two groups (survivors and non-survivors) and tested for association of microbiological parameters and mortality with Pearson´s Chi-Quadrat test.If less than 5 patients built a group, we used Fishers Exact Test.After univariate analysis we tested for collinearity and multivariately analyzed the risk factors for mortality by using logistic regression.Findings were regarded as statistically significant, if p-value was < 0.05.Kaplan-Meier-curves were calculated for survival analyses by using LogRank-test. This study was approved by the local ethics committee of the University of Heidelberg, Germany (authorization number: S-457/2015). Results 246 patients with HM requiring invasive MV due to pneumonia were included in this analysis.Four patients were lost to follow-up and not included in 1-year survival analysis.ICU and 1-year survival were 37.0% (91/246) and 19.0% (46/242), respectively.97 (39.4%) patients were female.At microbiological proof.In addition to clinical and radiological findings, one or more pathogenic agents were detectable in 143 (58.1%) patients.Fungal, bacterial and viral infections were detected in 89 (36.2%), 55 (22.4%) and 41 (16.7%) patients, respectively (see Fig. 1).51 (20.7%)ICU admission median age was 58 years and 117 (47.6%) patients were post-HSCT.Baseline characteristics of the study cohort are displayed in Table 1. For all included patients at least one LRT specimen was available for analysis.In 103 (41.9%) patients, the diagnosis of pneumonia was based on pneumonia-suspicious findings in CT scans combined with clinical symptoms, without Fungal pneumonia In 89 (36.2%) patients with pneumonia, a pathogenic fungal germ in respiratory specimens could be diagnosed (see Table 2).According to the EORTC criteria, four cases were classified as proven and 85 as probable invasive fungal disease [10].ICU and 1-year survival of patients with fungal pulmonary infections were 37 Multimicrobial pulmonary infections In 51 (20.7%) patients more than one pathogenic germ was detected.In Patients not surviving one year had significantly more often a CMV reactivation at intubation in univariate analysis (9% vs. 0%, p = 0.019) but showed no significant differences with respect to viral pneumonia and probable invasive Aspergillus disease frequencies (see Table 4). 
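The univariate comparisons reported here follow the Statistical analysis paragraph above: Pearson's chi-square test, Fisher's exact test when a group has fewer than five patients, and Kaplan-Meier curves compared with the log-rank test. A minimal sketch of that workflow is given below using scipy and the lifelines package; the software choice and all counts and survival times are assumptions for illustration, not the study's patient data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def association_test(table_2x2):
    """Pearson chi-square; switch to Fisher's exact test if any cell has fewer than 5 patients
    (one reading of the 'less than 5 patients built a group' rule)."""
    table = np.asarray(table_2x2)
    if (table < 5).any():
        _, p = fisher_exact(table)
        return "fisher", p
    _, p, _, _ = chi2_contingency(table, correction=False)
    return "chi2", p

# Toy 2x2 table: risk factor present/absent (rows) vs. ICU death yes/no (columns).
print(association_test([[16, 2], [139, 89]]))

# Kaplan-Meier estimate and log-rank comparison on toy survival data (days).
rng = np.random.default_rng(1)
time_a, time_b = rng.exponential(120, 80), rng.exponential(260, 80)
event_a, event_b = rng.random(80) < 0.85, rng.random(80) < 0.70   # True = death observed

kmf = KaplanMeierFitter()
kmf.fit(time_a, event_observed=event_a, label="risk factor present")
print(kmf.median_survival_time_)

print(logrank_test(time_a, time_b,
                   event_observed_A=event_a, event_observed_B=event_b).p_value)
```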
In five cases, HHV (3x CMV, 2x HSV-1) were found to be primarily causative for pneumonia. In these five cases, CT findings were highly suspicious for CMV and HSV-1 pneumonia in combination with very high counts of virus DNA (> 1,000,000 copies/ml) in LRT specimens.

Detection of human herpesvirus reactivation

Concomitant reactivation of HHV was detected in LRT fluids of 85 (34.6%) patients. The associated ICU and 1-year survival was 38.8% (33/85) and 23.2% (19/82), respectively. CMV, EBV, and HSV-1 were detectable in 28 (11.4%), 16 (6.5%), and 50 (20.3%) patients, respectively. In 18 of the CMV-positive patients, CMV was detectable at the intubation timepoint, while in 10 patients CMV reactivation only occurred at a later stage. The associated survival data from patients with HHV differ considerably and are shown in Table 2.

In multivariate analysis, the following microbiological risk factors for ICU mortality were found: probable invasive Aspergillus disease in combination with positive testing for GM in serum (p = 0.021, odds ratio 3.1 (1.2-8.0)) and CMV reactivation at the intubation timepoint (p = 0.043, odds ratio 5.3 (1.1-26.8)) (see Table 5). Long-term survival was not significantly associated with these microbiological risk factors in multivariate analysis (see Fig. 2a and b).

Discussion

In this study, high ICU mortality of 63% was found in patients with HM requiring invasive MV due to pneumonia. Microbiological parameters significantly associated with death in the ICU were probable invasive Aspergillus disease with positive serum GM (both as primary infection and as superinfection on CRV) and CMV reactivation at the intubation timepoint, with associated ICU mortality of 78% and 89%, respectively. One-year survival of the cohort was only 19%. In multivariate analysis, there were no microbiological risk factors significantly associated with 1-year survival.

In line with our finding of an ICU survival rate of 37%, other studies have also observed high mortality rates among patients with hematologic malignancies admitted to the ICU due to respiratory infections, with hospital survival rates ranging from only 30-40% [23][24][25]. Hence, our study adds further proof of the high mortality of infectious complications leading to the need for critical care support in patients with HM. Contrary to short-term survival, we did not find a significant association between microbiological factors and 1-year survival. Similarly, other studies have shown that the long-term survival of patients with HM is not compromised by an acute, ICU-requiring illness or complication, such as respiratory infections, provided that it is survived [26].

Knowing the germs causing pneumonia helps to treat patients more efficiently and can improve the outcome [27]. Thus, invasive diagnostic testing (e.g. BAL) is important for identifying causative germs as early as possible and providing adequate anti-infective therapy. However, in this study LRT fluids were analyzed extensively in all patients, but pneumonia-causing pathogens were identified in only about half of the patients. Other studies had similar findings [12,20]. In these cases, empirical therapy should be continued or (even without germ detection) adapted to radiological findings, the local spectrum of germs, and local drug resistances.

In hematological patients, fungal pneumonia, particularly due to Aspergillus spp., is associated with high mortality [12,28]. Our present study, with an ICU mortality of 66% in patients with pulmonary aspergillosis, underlines this. In combination with proof of GM in serum, Aspergillus spp. pneumonia was a significant risk factor for ICU mortality.

In line with our findings, Ledoux et al. recently verified the diagnostic value of GM testing in BAL and blood samples [28]. In addition, our findings indicate that proof of GM in serum in patients diagnosed with pulmonary aspergillosis is not only a diagnostic but also an outcome-relevant parameter. Nevertheless, the significance of GM testing is discussed controversially, and positive testing is not considered proven invasive aspergillosis in the latest EORTC/MSGERC definitions of invasive fungal disease [29]. Prospective studies will help to understand the diagnostic and prognostic value of GM testing.

Multimicrobial pulmonary infections are often seen in immunocompromised patients and complicate anti-infective therapy. In this study, multimicrobial pneumonia was detected in 20% of patients. In particular, patients suffering from pneumonia with CRV and consecutive Aspergillus spp. superinfections showed high ICU mortality. Prevalence and mortality of multimicrobial pneumonia were similar in other studies [20,30]. They also found pneumonia with CRV frequently associated with Aspergillus spp. superinfections with high mortality [20,31,32], a lesson learned also during the recent COVID-19 pandemic [33]. Thus, physicians should monitor for fungal superinfections in patients with viral pneumonia and ARF.

Prior to the SARS-CoV-2 pandemic, influenza was primarily seen as the CRV predisposing to Aspergillus spp. superinfections [34]. However, in this pre-COVID-19 pandemic study, a frequently detected constellation was RSV pneumonia with subsequent Aspergillus spp. superinfection. Similar to our findings, Magira et al. showed high mortality in hematological patients with this specific infectious constellation [35]. Hence, in addition to influenza and SARS-CoV-2, RSV appears to be another respiratory virus causing high morbidity and mortality, especially when combined with Aspergillus spp. superinfections. Prospective studies with more patients will help to determine whether this infectious constellation (CRV and Aspergillus spp.) is a significant risk factor.

Concomitant reactivation of HHV in critically ill patients is a frequent finding [36]. Many efforts have been undertaken to evaluate the risk especially of CMV reactivation in patients with HM [14]. We found CMV reactivation in the lung to be associated with high mortality. When CMV reactivation was detected early (at the intubation timepoint), it was a significant risk factor for ICU mortality. Pinana et al. could also show that CMV DNAemia in the context of ARF represents a risk factor for poor survival [30], while other studies showed inconclusive findings concerning the association between CMV reactivation and mortality [14]. However, our findings indicate that not just the proof of CMV but also the time context helps to interpret the mortality risk associated with CMV reactivation.

Due to the retrospective study design, there are several limitations. There was no standardized protocol for collecting microbiological and radiological data. Furthermore, our findings were not externally validated. Although our strict definition of pneumonia (radiologically proven) makes the study more comparable to other studies, it bears the risk of underdiagnosis, as pulmonary infiltrates are frequently challenging to recognize and interpret in hematological patients. Due to altered immune defense and frequent past antimicrobial therapies, hematological patients may experience colonization with facultatively pathogenic organisms. Therefore, the possibility of misinterpreting microbiological findings cannot be completely ruled out. ICU survival as a short-term parameter for mortality is not capable of fully attributing cause and effect. Other causes of mortality, such as infections outside the lungs or comorbidities, must be considered as confounders. Finally, the impact of the COVID-19 pandemic on the intensive care management of hematologic patients, along with the associated complications and mortality, has been demonstrated in other studies and should be considered when interpreting the results of our study [37].

Conclusion

In this study, high mortality was found in patients with HM requiring invasive mechanical ventilation due to pneumonia. In particular, patients suffering from multimicrobial pneumonia with CRV and consecutive Aspergillus spp. superinfections showed high ICU and 1-year mortality. Moreover, pulmonary aspergillosis with positive GM in serum and CMV reactivation in the context of ARF could be identified as microbiological parameters significantly associated with ICU mortality.

Fig. 1 Figure 1 displays the number of all detected pathogenic organisms in a Venn diagram. The overlapping circles indicate which groups of pathogens were frequently detected together in co-infections.

Fig. 2 Figure 2a displays the survival function of patients with and without Aspergillus spp. pneumonia and positive serum galactomannan (log-rank test). Figure 2b displays the survival function of patients with and without CMV reactivation at the intubation timepoint (log-rank test).

Table 3 Comparison of the total cohort with the subgroup of patients with HSCT. HSCT hematopoietic stem cell transplantation, ICU survival Intensive Care Unit survival, n number of patients.

Table 4 Microbiological risk factors for mortality (univariate analysis). GM galactomannan, ICU survival Intensive Care Unit survival, n number of patients, PIAD probable invasive Aspergillus disease.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Record metadata for the following article: added 2020-12-03T09:07:04.984Z; created 2020-12-02T00:00:00.000; id 227254719; source pes2o/s2orc; version v3-fos-license
{ "extfieldsofstudy": [ "Chemistry", "Medicine" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7782782", "pdf_hash": "81e51de04b42eb385c3e93e9a4fcf14071e51a38", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:52", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "2cbce3b918bf9513f799764698087abf862de042", "year": 2020 }
Endocytosis: a pivotal pathway for regulating metastasis A potentially important aspect in the regulation of tumour metastasis is endocytosis. This process consists of internalisation of cell-surface receptors via pinocytosis, phagocytosis or receptor-mediated endocytosis, the latter of which includes clathrin-, caveolae- and non-clathrin or caveolae-mediated mechanisms. Endocytosis then progresses through several intracellular compartments for sorting and routing of cargo, ending in lysosomal degradation, recycling back to the cell surface or secretion. Multiple endocytic proteins are dysregulated in cancer and regulate tumour metastasis, particularly migration and invasion. Importantly, four metastasis suppressor genes function in part by regulating endocytosis, namely, the NME, KAI, MTSS1 and KISS1 pathways. Data on metastasis suppressors identify a new point of dysregulation operative in tumour metastasis, alterations in signalling through endocytosis. This review will focus on the multicomponent process of endocytosis affecting different steps of metastasis and how metastatic-suppressor genes use endocytosis to suppress metastasis. BACKGROUND Cancer is the second leading cause of global mortality. 1 The spread of cancer cells from the primary tumour to distant organs and their subsequent progressive colonisation is referred to as metastasis. It is estimated that 90% of cancer-related deaths are due to metastatic disease rather than to the primary tumour growth. Typically, treatments for metastatic cancer are systemic therapy involving chemotherapy or molecular drugs, hormonal agents, immune checkpoint drugs, radiation therapy or surgery. Despite progress in extending cancer-survivorship rates, 2 limited progress has been made in the treatment of metastatic cancer due to its complex nature and an inadequate understanding of the molecular and biochemical mechanisms involved. Metastasis is a multistep process involving tumour cell invasion to neighbouring areas, intravasation into the bloodstream, arrest in the capillary bed of a secondary organ, extravasation from the circulatory system and colonisation at the secondary site. 3 All of the above steps occur via complex interactions between cancer cells and their microenvironments. Despite the documented complexity and redundancy of the metastatic process, mutation or changes in the expression of single genes have been reported to alter metastatic ability. Genes that are involved in the promotion of metastasis at distant sites are referred to as metastasis promoting genes. Expression of these genes facilitates cancer cell establishment of appropriate interactions with changing microenvironments to promote continued survival and proliferation at secondary sites. Similarly, genes that inhibit the process of metastasis without affecting the growth of the primary tumour are referred to as metastasis suppressor genes and are described in detail in the later part of this review. This review will highlight an often-overlooked aspect of metastasis, receptor endocytic pathways. Contributing to each step in metastasis is the distribution of multiple cell-surface receptors on tumour and microenvironmental cells. Receptor signalling is, in turn, modulated by endocytosis (internalisation, recycling or degradation). In recent years, there has been significant progress made towards understanding the mechanisms of the endocytosis pathway and its alterations that occur during metastasis. 
A growing body of literature suggests that receptor endocytosis affects metastasis and could be a tool for the functioning of metastasis suppressor or metastasis promoters. This review will focus on the role of endocytosis in metastasis and how these pathways are used by metastasis suppressors. ENDOCYTIC PATHWAYS AND METASTASIS The term 'endocytosis' is derived from the Greek word 'endon', meaning within, 'kytos', meaning cell and '-osis', meaning process. So, endocytosis is the process by which cells actively internalise molecules and surface proteins via an endocytic vesicle. Depending on the cargo type, internalisation route and scission mechanism, there are three general modes of vesicular endocytic trafficking that coexist in the cell and operate concurrently: phagocytosis, pinocytosis and receptor-mediated endocytosis. In phagocytosis, the cell's plasma membrane surrounds a macromolecule (large solid particles > 0.5 μm) or even an entire cell from the extracellular environment and generates intracellular vesicles called phagosomes. 4 Cellular pinocytosis/cellular drinking is a process in which fluids and nutrients are ingested by the cell, by pinching in and forming vesicles that are smaller than the phagosomes (0.5-5 μm). 5 Both phagocytosis and pinocytosis are non-selective modes of taking up molecules. However, there are times when specific molecules are required by cells and are taken up more efficiently by the process of receptor-mediated endocytosis (RME). The endocytosis of specific cargoes via specific receptors can take place by clathrin-mediated (CME), caveolae-mediated (CavME), clathrin-and caveolae-independent endocytic (CLIC/GEEC) pathways. These endocytic pathways are briefly described below. Table 1 links selected endocytic proteins to in vitro components of the metastatic process and in vivo metastasis in cancer. Clathrin-mediated endocytosis (CME) The most studied endocytic mechanism is CME. It was first found to play an important role in low-density lipoprotein 6 and transferrin uptake. 7 It is known to be involved in internalisation and recycling of multiple receptors engaged in signal transduction (G-protein and tyrosine-kinase receptors), nutrient uptake and synaptic vesicle reformation. 8 Clathrin-coated pits (CCP) are assemblies of cytosolic coat proteins, which are initiated by AP2 (assembly polypeptide 2) complexes that are recruited to a plasma membrane region enriched in phosphatidylinositol-(4,5)-bisphosphate lipid. 9 AP2 acts as a principal cargo-recognition molecule and recognises internalised receptors through a short sequence motif in their cytoplasmic domains. 10 As the nascent invagination grows, AP2 and other cargo-specific adaptor proteins recruit and concentrate the cargo, which is now facing the inside of the vesicle. Following cargo recognition/concentration, AP2 complexes along with other adaptor proteins to recruit clathrin. Clathrin recruitment stabilises the curvature of the growing CCP with the help of BAR (Bin-Amphiphysin-Rvs)-domain-containing proteins until the entire region invaginates to form a closed vesicle. 11 Release of mature clathrin-coated vesicles from the plasma membrane is performed by the large multi-domain GTPase, Dynamin. Proteins such as amphiphysin, endophilin and sorting nexin 9 (BAR-domain-containing proteins) recruit Dynamin around the necks of budding vesicles. 12 Similarly, other Dynamin partners (i.e., Grb2) also bind to Dynamin and increase its oligomerisation, which results in a higher GTPase activity. 
13 Oligomerised Dynamin assembles into collar-like structures encircling the necks of deeply invaginated pits and undergoes GTP hydrolysis to drive membrane fission. 14 After a vesicle is detached from the plasma membrane, the clathrin coat is disassembled by the combined action of the ATPase HSC70 and the coat component auxilin. 15,16 The released uncoated vesicle is ready to travel and fuse with its target endosome. Signalling through CME is critical in cancer and metastasis. Clathrin light-chain isoform (CLCb) is specifically upregulated in non-small-cell lung cancer (NSCLC) cells and is associated with poor prognosis. NSCLC cells expressing CLCb exhibit increased rates of CME through Dynamin 1. This leads to activation of a positive feedback loop involving enhanced epidermal growth factor receptor (EGFR)-dependent Akt/GSK-3β (glycogen synthase kinase 3β) phosphorylation, resulting in increased cell migration and metastasis. 17 Dynamin 2 is crucial for the endocytosis of several proteins known to be involved in cancer motility and invasiveness (e.g., β-1 integrin and focal adhesion kinase). Dynamin 2 overexpression correlates with poor prognosis. 18 The regulation of certain receptors that are known to affect cancer and metastasis (i.e., EGFR and transforming growth factor β receptor (TGFβR)) by clathrin-and non-clathrin-mediated internalisation pathways preferentially targets the receptors to different fates (i.e., recycling or degradation). 19,20 Different fates of receptors determine the net signalling output in a cell and affect cancer progression. Interestingly, CME is known to skew EGFR fate towards recycling rather than degradation, leading to prolonged duration of signalling. 20 Similarly, the internalised EGF-EGFR complex may maintain its ability to generate cell signalling from endosomes affecting multiple downstream pathways. 21 This active endosomal EGFR is known to regulate oncogenic Ras activity by co-internalising its regulators including Grb2, SHC, GAP and Cbl. 21,22 Caveolae-mediated endocytosis (CavME) CavME is the second most studied pathway of endocytosis and has been shown to be important in transcytotic trafficking across cells and mechanosensing. 23 The CavME process involves formation of a bulb-shaped, 50-60-nm plasma membrane invaginations called caveolae (little caves), which is driven by both integral membrane proteins called caveolins and peripheral membrane proteins called cavins (cytosolic coat proteins). Caveolins (encoded by CAV-1, 2 and 3 paralogues) are small integral membrane proteins that are inserted into the inner side of the membrane bilayer through its cytosolic N-terminal region that binds to cholesterol. About 50 cavin molecules associate with each caveolae and exist in a homo-or hetero-oligomeric form (using four cavin family members). 24 CavME is triggered by ligand binding to cargo receptors concentrated in caveolae. Budding of caveolae from the plasma membrane is regulated by kinases and phosphatases, such as Src tyrosine kinases and serine/threonine protein phosphatases PP1 and PP2A. 25 As with CME, Dynamin is required to pinch off caveolae vesicles from the plasma membrane. 26 Components of CavME have a vital role in cell migration, invasion and metastasis. It is speculated that CAV-1 has a dual role in cancer progression and metastasis. In the early stages of the disease, it functions predominantly as a tumour suppressor, whereas at later stages, its expression is associated with tumour progression and metastasis. 
[27][28][29] As with a tumour suppressor, CAV-1 is often deleted in human cancers and is mechanistically known to act through the caveolin scaffolding domain (CSD) by inhibiting cytokine receptor signalling. 28,30 The CAV-1 effect on late-stage tumour progression and metastasis has been attributed to tyrosine (Tyr14) phosphorylation of its protein product by Src kinases, leading to increased Rho/ROCK signalling and subsequent focal adhesion turnover. 31 Knockdown of CAV-1 in breast and prostate cancer cells reduced the velocity, directionality and persistency of cellular migration. 31,32 Similarly, expression of CAV-1 has been used as a marker of prognosis and overall survival in various types of human cancer. In pancreatic adenocarcinoma, positive expression of CAV-1 was found to correlate with tumour diameter, histopathological grade and poor prognosis. In lung cancer, CAV-1 expression statistically correlates with poor differentiation, pathological stage, lymph-node metastasis and poor prognosis. However, in hepatocellular carcinoma tissues, low expression of CAV-1 is associated with poor prognosis. 33

Clathrin-independent endocytosis (CIE)

As per the name, the endocytic vesicles involved in CIE have no distinct coat and were first discovered by their resistance to inhibitors that block CME and CavME. 34 CIE encompasses several pathways. (i) An endophilin-, Dynamin- and RhoA-dependent pathway for endocytosis of the interleukin-2 receptor. 35 (ii) A clathrin- and Dynamin-independent (CLIC/GEEC) pathway in which the GTPases RAC1 and CDC42 lead to actin-dependent formation of clathrin-independent carriers (CLICs). This, in turn, forms the glycosylphosphatidylinositol (GPI)-AP-enriched endosomal compartments (GEECs). 36,37 (iii) An ARF6-dependent pathway involving the small GTPase ARF6, which activates phosphatidylinositol-4-phosphate 5-kinase to produce phosphatidylinositol-(4,5)-bisphosphate, leading to stimulation of actin assembly and endocytosis. 38 The CIE pathway has been shown to suppress cancer cell blebbing and invasion through the GTPase-activating protein GRAF1 (GTPase regulator associated with focal adhesion kinase-1) (Table 1). 39 Various receptors are endocytosed by the CIE pathway, including the interleukin-2 receptor (IL-2R), T-cell receptor (TCR) and GPI-linked proteins. 40

Table 1 (excerpt), caveolin-mediated endocytosis (CavME): Caveolin, the major coat protein of caveolae, involved in invagination of lipid raft domains. In early stages of the disease caveolin functions predominantly as a tumour suppressor, whereas at later stages its expression is associated with tumour progression and metastasis; the late-stage effect of CAV-1 has been attributed to tyrosine (Tyr14) phosphorylation of its protein product by Src kinases (hepatomas, ovarian cancers, prostate cancer and breast cancer; 30,31). CAV-1 is often deleted in human cancers and acts by inhibiting cytokine receptor signalling (breast cancer; 31,32). Knockdown of CAV-1 reduced velocity, directionality and persistency of cellular migration (breast and prostate cancers; 31,32). Positive expression of CAV-1 is a marker of histopathological grade and poor prognosis (pancreatic cancer); low expression of CAV-1 is associated with poor prognosis (hepatic cancer).

DOWNSTREAM ENDOSOMAL TRAFFICKING

Internalised receptor-ligand cargoes can merge into a common endosomal network by undergoing multiple rounds of fusions. The first set of fusions leads to the formation of early endosomes, where initial sorting routes are engaged and the fate of the internalised receptors is decided (Fig. 1). Early endosomes are identified by the association of several proteins on their cytosolic surface, including RAB5, along with its effector VPS34/p150, a phosphatidylinositol 3-kinase complex. VPS34/p150 generates phosphatidylinositol 3-phosphate, which regulates the spatiotemporal and compartmentalisation aspects of endosomal functions. 41,42 Structurally, early endosomes have tubular (membrane) and vacuolar (vacuole) domains. Most of the membrane surface area lies in the tubules, while much of the volume is in the vacuoles. The membrane domains are enriched in proteins, including RAB5, RAB4, RAB11, ARF1/COPI, retromer and caveolin. 43,44 These proteins are involved in multiple functions, including molecular sorting of early endosomes to distinct organelles, their recycling and maturation to late endosomes or to the trans-Golgi network (TGN) (Fig. 1). The role of these endocytic proteins in metastasis in vivo and their prognostic potential, if any, are listed in Table 1.

A recycling pathway returns endosomes to the cell surface either by a fast recycling route (via RAB4-positive endosomes) or by a slow recycling route (via RAB11-positive endosomes). 45 Internalised receptors in early endosomes can be sorted into the recycling pathway through an extensive tubulation of the early endosome membranes in a process called 'geometry-based sorting', wherein receptors that are sorted into the newly formed tubular membranes of the early endosome are recycled back to the plasma membrane. Intralumenal vesicles (ILVs) also form in early endosomes, driven by clathrin and components of the endosomal sorting complex required for transport (ESCRT). 46 ESCRT-mediated receptor sorting into ILVs is an evolutionarily conserved process that is required for multivesicular body (MVB) formation. ESCRT uses its various complexes for receptor recognition (ESCRT-0), inward budding (ESCRT-I and II) and final ESCRT-III-mediated abscission. 47 This separates the cytoplasmic portion of the receptors from the rest of the cell, leading to abrogation of their signalling. Interestingly, depletion of ESCRT-0 and ESCRT-I subunits inhibits the degradation of EGFR and results in enhanced recycling and sustained activation of extracellular signal-regulated kinase (ERK) signalling. 48,49

A role for endosomal acidification and ligand dissociation has also been established. Recycling of receptors to the plasma membrane takes place if the ligands are released in the early endosome (e.g., the transferrin receptor), where the pH is maintained at ~6.5. 50 Conversely, some signalling receptors (e.g., EGFR) often retain ligand binding and remain active even at low (~4.5) pH, leading to their continual signalling from endosomal compartments until they are sorted into ILVs and degraded in the lysosome. 51 Some internalised receptors in early endosomes can be sorted to the TGN in a process called retrograde transport (e.g., mannose-6-phosphate receptors and several toxins such as Shiga, cholera and ricin). The TGN is a network of interconnected tubules and vesicles at the trans-face of the Golgi apparatus. It is essential for maintaining cellular homoeostasis and is known to play a crucial role in protein sorting, diverting proteins and lipids away from lysosomal degradation.
Mature late endosomes are approximately 250-1000 nm in diameter and are round/oval in shape. They are characterised by the presence of RAB7-GTPase, which is fundamental for the maturation of early-to-late endosomes and for lysosomal biogenesis. Maturation of early-to-late endosomes depends on the formation of a hybrid RAB5/RAB7 endosome, wherein RAB7 is recruited to the early endosome by RAB5-GTP. 52 Late endosomes undergo homotypic fusion reactions, grow in size and acquire more intraluminal vesicles. Once intralumenal vesicle-containing late endosomes become enriched with RAB35, RAB27A, RAB27B and their effectors Slp4 and Slac2b, they fuse with the plasma membrane to release exosomes. 37 The released exosomes are small (40-100 nm in diameter), single membrane-bound vesicles that contain protein, DNA and RNA. Mostly, however, late endosomes move to the perinuclear area of the cell in the vicinity of lysosomes using dynein-dependent transport. Here, late endosomes undergo transient fusions with each other and eventually fuse with lysosomes to generate a transient hybrid organelle called the endolysosome. It is in the endolysosomes that most of the hydrolysis of endocytosed cargo takes place. 37 Following a further maturation process, the endolysosome is converted into a classical dense lysosome. Cellular contents and organelles can also be delivered to lysosomes through a separate pathway called autophagy. Autophagy or self-eating is a unique membrane trafficking process whereby a newly formed isolation membrane can elongate and engulf part of the cytoplasm or organelles to form autophagosomes that are delivered to the lysosome for degradation. There are an increasing number of reports pointing to a mechanistic role for autophagy in the process of tumour metastasis, detailed in a recent review. 53

An astonishing number of endosomal trafficking pathway proteins are known to be functionally important in tumour progression and metastasis (Table 1). Many have been validated in cancer cell motility and invasion, but a considerable number have been shown to modulate in vivo metastasis. The alterations identified include up- or down-regulation of expression, or mutation, and generally lead to aberrant receptor trafficking/recycling/degradation/signal duration, which has a profound effect on cancer cell migration, invasion and/or proliferation. While most of these reports focus on a single signalling pathway, it is likely that multiple pathways are also affected. These mechanistic studies cover a wide range of cancer types. Additional details on different endosomal trafficking members and their role(s) in cancer and metastasis can be found in recent reviews. [54][55][56]

Table 1 (excerpt): Mutant p53 drives metastasis in autochthonous mouse models of pancreatic cancer by controlling the production of the sialomucin podocalyxin and the activity of the RAB35 GTPase, which interacts with podocalyxin to influence its sorting to exosomes. These exosomes influence integrin trafficking in normal fibroblasts to promote deposition of a highly pro-invasive ECM.

INTEGRIN AND EXTRACELLULAR MATRIX TRAFFICKING IN METASTASIS

Cancer cells invade through the extracellular matrix (ECM) in part by producing matrix metalloproteinases (MMPs) and other proteinases that degrade the ECM, thereby creating paths for migration. Similarly, cells attach to the ECM by means of integrins that are key regulators of cell adhesion, migration and proliferation.
Fig. 1 Endosomal trafficking and metastasis suppressor genes. A wide variety of receptors and their ligands are moved intracellularly by endocytosis. Clathrin-mediated endocytosis begins with initiation and maturation of clathrin-coated pits by AP2 complexes that are recruited to the plasma membrane and act as a principal cargo-recognition molecule. As the nascent invagination grows, AP2 and other cargo-specific adaptor proteins recruit and concentrate the cargo. AP2 complexes, along with other adaptor proteins, recruit clathrin. Clathrin recruitment stabilises the curvature of the growing pit with the help of other BAR-domain-containing proteins. BAR-domain-containing proteins also recruit Dynamin to the neck of the budding vesicle, until the entire region invaginates to form a closed vesicle. Dynamin is a large GTPase, which forms a helical oligomer around the constricted neck and, upon GTP hydrolysis, mediates the fission of the vesicle to release it into the cytoplasm. Following vesicle detachment from the plasma membrane, the clathrin coat is disassembled. The released vesicle goes through a first set of fusions, leading to formation of early endosomes, where initial sorting decisions are made and the fate of the internalised sorting proteins and lipids is decided. The RAB proteins primarily localised to the early endosome include RAB5 and RAB4, along with the lesser-known RAB21 and RAB22. They regulate the motility of the early endosome on actin and microtubule tracks, its homotypic fusion and specialised functions of sorting and trafficking. The internalised receptors can be sorted into recycling pathways through extensive tubulation of the early endosome membranes, wherein receptors that are sorted into the newly formed tubular membranes recycle back to the plasma membrane through recycling endosomes. Alternately, early endosome growth and maturation could lead to the trans-Golgi network (TGN) or to late endosomes. Mature late endosomes are approximately 250-1000 nm in diameter and are characterised by the generation of a RAB7 domain. Late endosomes undergo homotypic fusion reactions, grow in size and acquire more intralumenal vesicles (ILVs). ILV-containing late endosomes become enriched with RAB35 and RAB27 and their effectors, which promote their fusion with the plasma membrane to release exosomes (vesicles 40-100 nm in diameter). Predominantly, late endosomes move to the perinuclear area of the cell, where they undergo transient fusions with each other and eventually fuse with lysosomes for degradation of their content. Cellular proteins synthesised in the rough endoplasmic reticulum (ER) are constantly secreted from the ER to the Golgi complex in mammals through an ER-Golgi intermediate compartment (ERGIC). Points where metastasis suppressors interact with the endocytic process are highlighted.

The interplay between integrins and ECM remodelling proteases is a major regulator of tumour invasion. In oral squamous cell carcinoma (SCC), increased αVβ6 integrin expression leads to the activation of MMP-3 and promotes oral SCC cell proliferation and metastasis in vivo. 57 MMP-14 (membrane type 1 metalloprotease, MT1-MMP), along with integrin αVβ3, co-localised to the protruding ends of invadopodia, and its high local concentration on the cell membrane promoted metastasis.
58 Interestingly, WDFY2 (a cytosolic protein) controls the recycling of MT1-MMP to the membrane, and loss of WDFY2 leads to enhanced secretion of MT1-MMP leading to active invasion of cells. 59 Recent studies highlight the importance of integrin trafficking (endocytosis and recycling) as a modulator of cancer cells' fate. For example, rapid recycling of integrins from the leading edge of individual cells assists in efficient cell motility by providing a supply of fresh receptors that are internalised at the trailing edge. More details on the trafficking of MMPs and integrins and its role in metastasis can be found in recent reviews. 60,61 METASTASIS SUPPRESSORS AND ENDOCYTOSIS Metastasis suppressors are a group of genes that suppress the metastatic potential of cancer cells without significantly affecting the size of primary tumour. 62 So far, more than 20 metastasis suppressor genes (including miRNAs) have been identified in multiple cancer types with a wide range of biochemical activities. 63 Some of the metastasis suppressor genes working through alterations in endocytosis are described below: NME1 (NM23/NM23-H1, non-metastatic clone 23, isoform H1) NME is a multifunctional protein that is highly conserved from yeast to humans. Its enforced expression suppressed metastasis in a variety of cancer cell lines without altering primary tumour growth. 64 Apart from being a metastasis suppressor, it is also known to have a developmental function. The Drosophila homologue of NME is awd (abnormal wing discs) and is known to regulate cell differentiation and motility of multiple organs in late embryogenesis by regulating growth factor receptor signalling through endocytosis. These studies identified a genetic interaction between awd and dynamin (shi). 65 An aberrant endocytosis was associated with mutant awd phenotypes and complemented RAB5 or shi genes. [65][66][67] It was also shown that awd regulated tracheal cell motility in development by modulating the fibroblast growth factor receptor (FGFR) levels through dynaminmediated endocytosis. 65,68 Interestingly, loss of awd gene also blocked Notch signalling by altering the receptor processing that leads to Notch accumulation in the early endosomes. 67 Recent reports in mammalian cancer models have also highlighted the role of NME as an interacting partner of Dynamin in endocytosis. 69,70 NME transfectants of multiple cell lines exhibited increased endocytosis of EGFR and transferrin in concert with motility suppression. Both the increased endocytic and motilitysuppression phenotypes were blocked by inhibitors of Dynamin. In a lung-metastasis assay, NME1 overexpression failed to significantly suppress metastasis in cells in which Dynamin 2 was also knocked down. Using the EGF/EGFR signalling axis as an in vitro model, NME1 decreased the phospho-EGFR and phospho-Akt levels in a Dynamin 2-dependent manner, highlighting the relevance of this interaction for downstream signalling. It was speculated that NME acted as a GTP provider/oligomerising agent of Dynamin 2, leading to higher Dynamin 2 GTPase activity and increased endocytosis (Fig. 1). 69,70 Our data identified another function of a NME-Dynamin interaction: in vitro, NME promoted the oligomerisation of Dynamin and its increased GTPase activity, which are needed for vesicle scission. 69 KAI1 (CD82, cluster of differentiation 82) KAI1/CD82 is a member of the evolutionarily conserved tetraspanin family, and was initially identified as a metastasis suppressor in prostate cancer. 
71 KAI1 has since been established as a metastasis suppressor in a variety of solid tumours. Its higher expression predicts a better prognosis, [72][73][74] whereas reduced expression of KAI1 has been widely correlated with an aggressive cancer in several cancer types, including pancreatic, hepatocellular, bladder, breast and non-small-cell lung cancers. 73,75,76 KAI1-mediated suppression of metastasis is thought to be achieved primarily by inhibiting cancer cell migration and invasion. 77 This phenotype is the result of forming oligomeric complexes with binding partners such as integrins, EGFR and intracellular signalling proteins, such as protein kinase C (PKC). This complex generally leads to either redistribution or increased internalisation of multiple receptors. For example, overexpression of KAI1 leads to redistribution of urokinase-type plasminogen activator receptor (uPAR) into a stable complex with integrin α5β1 in focal adhesions. 78 Focal adhesion binding of uPAR reduces its ability to bind the ligand uPA and consequently to cleave and activate plasminogen. Similarly, KAI1 also binds with EGFR, ErbB2 and ErbB3; for EGFR, this leads to accelerated endocytosis and desensitisation. 79,80 KAI1 also specifically inhibits ligand-induced EGFR dimerisation and alters the distribution of EGFR in the plasma membrane, which consequently affects its activation. 80 MTSS1 (metastasis suppressor protein 1 or MIM, missing in metastasis) MTSS1/MIM, originally identified in bladder cancer cell lines, was present in non-metastatic but not metastatic bladder cancer cells. 81 It is hypothesised that MTSS1 suppresses metastasis by acting as a scaffold protein to interact with actin-associated proteins to regulate cytoskeletal dynamics and lamellipodia formation, consequently affecting invasion and metastatic behaviour of cancer cells. 82 In head and neck squamous cell carcinoma, MTSS1 augments EGF signalling by antagonising EGFR endocytosis at low cell densities and promotes cellular proliferation at early stages of primary head and neck squamous cell carcinoma tumour growth. However, at high cell densities, MTSS1 has a negative impact on EGF signalling and inhibits metastasis. 83 KISS1 (kisspeptin-1) The KISS1 gene produces a peptide product called kisspeptins (KP), which act as an endogenous ligand for a G-protein-coupled receptor, KISS1R (GPR54). 84 KISS1 acts as a metastasis suppressor gene through its KP/KISS1R signalling in numerous human cancers (melanoma, pancreatic cancer and gastric carcinoma) by inhibiting cellular motility, proliferation, invasion, chemotaxis and metastasis. 85 However, in breast cancer, KP stimulates invasion of cancer cells and high expression of KISS1; GPR54 mRNA levels positively correlated with shorter relapse-free survival. Interestingly, GPR54 directly complexes with EGFR, and stimulation of breast cancer cells by either EGF or KP-10 regulated the endocytosis of both GPR54 and EGFR. 86 This signalling has an opposite effect on breast cancer cells, i.e., it is pro-migratory and pro-invasive in human breast cancer cells. Metastasis suppressor genes, while often showing statistically significant inverse trends of tumour expression and patient survival, are not likely to become clinically used prognostic factors, in favour of more complex gene signatures. As with tumour suppressors, their translation to the clinic has also been problematic. 
Restoration of metastasis suppressor expression in every metastatic tumour cell would be needed for optimal activity, which is unrealistic. Our laboratory explored the transcriptional upregulation of NME by high-dose medroxyprogesterone acetate. 87 A Phase 2 trial, conducted at Indiana University, was a technical failure, as serum levels of medroxyprogesterone acetate were not sufficiently elevated, although some long-term stable disease was observed. 88 How the endocytic pathways can contribute to a metastasis-suppressor clinical-translational effort is currently unknown but of high interest. More research to identify the complex mechanisms underlying these processes is warranted.

CONCLUSIONS

Endocytosis is a process of internalisation of the plasma membrane along with its membrane proteins and lipids. Cells use endocytosis to regulate signalling and to sample the extracellular milieu for appropriate responses. It affects almost all of the steps of metastasis and is used as a tool for the functioning of metastasis suppressors. Based on the literature, endocytosis regulates receptor internalisation, recycling and degradation, or could affect cytoskeleton dynamics to alter cancer cell invasion or metastasis. However, the majority of the above conclusions have been made based on studies conducted on cancer cell lines. These studies would benefit from validation on patient-derived tissues. Other challenges in this field are a lack of high-resolution knowledge of the endosomal sorting complexes and their central regulators, and of how signalling in cancer cells is altered at specific stages of endocytosis. These issues will undoubtedly be clarified as research progresses. Such central regulators could serve as trafficking nodes that are amenable to therapeutic interception. A potential issue in translation is the effect of an inhibitor of an endocytic node on the multiple signalling pathways that it engages, and how the cumulative effects modulate the metastatic phenotype. This issue is not unique to endocytosis and applies to DNA methylation and other cancer processes. In summary, targeting the endocytic machinery could be a viable and promising therapeutic strategy for cancer and metastasis.

AUTHOR CONTRIBUTIONS

I.K. and P.S.S. reviewed the literature, drafted and revised the paper.

ADDITIONAL INFORMATION

Ethics approval and consent to participate Not applicable. Consent to publish Not applicable. Data availability Not applicable. Competing interests The authors declare no competing interests. Funding information This work is supported by the NIH Intramural program. Note This work is published under the standard license to publish agreement. After 12 months the work will become freely available and the license terms will switch to a Creative Commons Attribution 4.0 International (CC BY 4.0). Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Record metadata for the following article: added 2018-05-09T00:43:46.005Z; created 2018-05-07T00:00:00.000; id 19218156; source pes2o/s2orc; version v3-fos-license
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/cam4.1516", "pdf_hash": "c9e1c31feec33a2e49b855e877fee9a9f2e6ff0d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:53", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "b9c981d5699fed4e2f6edc17068175d02363d687", "year": 2018 }
A multicenter phase II trial of neoadjuvant letrozole plus low‐dose cyclophosphamide in postmenopausal patients with estrogen receptor‐positive breast cancer (JBCRG‐07): therapeutic efficacy and clinical implications of circulating endothelial cells Abstract Neoadjuvant endocrine therapy has been reported to decrease tumor size, which leads to increased breast conservation rates. To improve the clinical response, metronomic chemotherapy with endocrine therapy is a promising strategy. A multicenter phase II single‐arm neoadjuvant trial with letrozole and cyclophosphamide was conducted. Eligibility criteria included postmenopausal status, T2–4 N0–1, and estrogen receptor‐positive breast carcinoma. Letrozole (2.5 mg) plus cyclophosphamide (50 mg) was given orally once a day for 24 weeks. The primary endpoint was the clinical response rate (CRR). To investigate anti‐angiogenic effects, circulating endothelial cells (CECs) were quantified using the CellSearch system. From October 2007 to March 2010, 41 patients were enrolled. The CRR was 67.5% (52.0–80.0%), which was above the prespecified threshold (65%). The conversion rate from total mastectomy to breast‐conserving surgery was 64% (18/28). Grade 3 or greater nonhematological toxicity was not reported. Clinical response was associated with improved disease‐free survival (DFS) (P = 0.020). The increase in CEC counts at 8 weeks was observed in nonresponders (P = 0.004) but not in responders. Patients with higher CEC counts at baseline or post‐treatment showed worse DFS than those with lower counts (P < 0.001 at baseline and = 0.014 post‐treatment). Multivariate analysis showed that post‐treatment CEC counts but not pretreatment counts were independently correlated with DFS (P = 0.046). In conclusion, neoadjuvant letrozole plus cyclophosphamide showed a good clinical response for postmenopausal patients with estrogen receptor‐positive breast cancer. CEC quantification is a promising tool for treatment monitoring and prognostic stratification for metronomic therapy following validation of our results in larger studies. Clinical trial registration number: UMIN000001331 Phase II study of neoadjuvant letrozole combined with low‐dose metronomic cyclophosphamide for postmenopausal women with endocrine‐responsive breast cancer (JBCRG‐07) Introduction Neoadjuvant endocrine therapy (NET) is one of the treatment options for postmenopausal patients with endocrineresponsive breast cancer. NET has been reported to result in decreased tumor size and increased breast conservation rates [1][2][3][4][5][6]. Because endocrine therapy is associated with lower toxicity than chemotherapy, NET is preferable to neoadjuvant chemotherapy, especially in older patients and those with worsening performance status. In order to further improve surgical outcome, it is important to increase the NET response rate without increasing adverse effects. Chemo-endocrine therapy using metronomic chemotherapy is potentially useful in this regard. Metronomic chemotherapy is the delivery of low doses of cytotoxic drugs at regular frequent intervals to avoid toxic side effects [7,8]. It has been suggested to act via multiple mechanisms in exerting anticancer effects, including anti-angiogenesis, antitumor immune response, and direct anticancer action [9]. Oral cyclophosphamide is one of the most commonly used metronomic agents and is administered alone or together with other drugs such as capecitabine and methotrexate [10][11][12][13]. 
Because metronomic chemotherapy shows anticancer effects via different mechanisms of action without overt toxic side effects, it is a good candidate in combination with endocrine therapy. The combined administration of the aromatase inhibitor letrozole with low-dose metronomic cyclophosphamide in elderly patients has been reported earlier [14]. This randomized phase II trial showed an overall response rate (ORR) of 87.7% in patients assigned to receive letrozole plus cyclophosphamide, while letrozole alone showed an ORR of 71.9%. In addition, post-treatment expression of Ki-67 was significantly lower in tumors treated with the combined therapy than in tumors treated with letrozole alone. Thus, the combination of letrozole and oral cyclophosphamide appears effective and promising as neoadjuvant therapy. We conducted a multicenter phase II single-arm trial of neoadjuvant metronomic chemo-endocrine therapy with letrozole and oral cyclophosphamide in Japan (Japan Breast Cancer Research Group-07 trial: UMIN000001331). To investigate the possible role of anti-angiogenic effects in metronomic chemo-endocrine therapy, circulating endothelial cells (CECs) were quantified prior to and during the neoadjuvant treatment, and their association with treatment response and prognosis was examined. Patients and Methods Patients Women with previously untreated, clinical T2-4 N0-1 and estrogen receptor (ER)-positive breast carcinoma were enrolled in this study. Other inclusion criteria were (1) postmenopausal status and age 60 years or older; (2) 0-1 Eastern Cooperative Oncology Group performance status; and (3) written informed consent for participation in this study. Patients receiving agents that affect sex hormone status, such as hormone replacement therapy and raloxifene, were excluded. Consecutive patients who met the inclusion criteria and agreed to participate in the study were recruited from October 2007 to March 2010. Written informed consent was obtained from all patients who participated in the study. The study conforms to the provisions of the Declaration of Helsinki. Treatment Patients received letrozole (2.5 mg) plus cyclophosphamide (50 mg) orally once a day for 24 weeks, and surgical therapy was conducted 1-4 weeks after the last administration of letrozole and cyclophosphamide. Anticipated surgery type before treatment and surgery type actually performed were compared to investigate whether preoperative therapy resulted in a higher breast conservation rate. Adverse events (AEs) were assessed in accordance with Common Terminology Criteria for Adverse Events (CTCAE) version 3.0. No letrozole reduction was planned. Letrozole was planned to be interrupted when severe AEs occurred. Cyclophosphamide administration was delayed if the leukocyte count was <2000/mm 3 or if the neutrophil count was <1000/mm 3 . In the event of grade 2 or greater cystitis and other grade 3 nonhematological AEs, cyclophosphamide was interrupted and postponed until recovery. Postoperative radiation therapy, chemotherapy, trastuzumab, and endocrine therapy were given as per institutional practice. The protocol was approved by the ethics committee in each institute. Endpoints The primary endpoint was the clinical response rate, assessed using calipers, ultrasound (US), or computed tomography (CT)/magnetic resonance imaging (MRI) during the 24-week neoadjuvant treatment period in the intention-to-treat (ITT) population. Tumor response was evaluated in accordance with RECIST ver. 1.0 [15]. 
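For readers less familiar with RECIST ver. 1.0, the criteria grade target-lesion response from the change in the sum of the longest lesion diameters. The sketch below is only a simplified illustration of that target-lesion rule (it ignores non-target lesions, new lesions and response confirmation) and is not code from the trial.

```python
def recist10_target_response(baseline_sum: float, nadir_sum: float,
                             current_sum: float) -> str:
    """Classify target-lesion response from sums of longest diameters (mm),
    using the RECIST 1.0 thresholds. Simplified: non-target lesions, new
    lesions and confirmation requirements are not considered."""
    if current_sum == 0:
        return "CR"  # disappearance of all target lesions
    if nadir_sum > 0 and (current_sum - nadir_sum) / nadir_sum >= 0.20:
        return "PD"  # at least 20% increase over the smallest sum on study
    if baseline_sum > 0 and (baseline_sum - current_sum) / baseline_sum >= 0.30:
        return "PR"  # at least 30% decrease from baseline
    return "SD"

# Example: a 50 mm baseline sum shrinking to 32 mm is a partial response.
print(recist10_target_response(baseline_sum=50.0, nadir_sum=50.0, current_sum=32.0))
```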
Secondary endpoints included pathological therapeutic effects, breast conservation rate, safety assessed using CTCAE ver. 3.0, disease-free survival (DFS), and overall survival (OS). The target number of patients in the protocol was set at 40, based on the response rate of 88% in a previous report [14]. On the assumption that the expected response rate was 85%, 33 patients would be required for verification of effectiveness under conditions of a 65% threshold (based on NET results using letrozole alone), a 5% one-sided significance level, and 80% detection power.

Pathological analyses

Pathological analyses were performed in a central laboratory. Tumor biopsy specimens before preoperative therapy were assessed for estrogen receptor, progesterone receptor (PgR), and human epidermal growth factor receptor type 2 (HER2). ER and PgR status were defined as positive for tumors with 10% or more positive tumor cells. HER2 positivity was determined as strong expression (3+) using immunohistochemistry or as a HER2:CEP17 ratio >2.2 using fluorescence in situ hybridization [16]. The pathological response was assessed using surgical samples following preoperative therapy. A pathological complete response (pCR) was defined as no residual invasive tumor cells in the mammary gland and lymph nodes. Grade 2 response was defined as reduction in tumor cells by more than two-thirds (66%), and grade 1 was defined as reduction in tumor cells ≤ one-third (33%). The Ki-67 labeling index (LI) using the MIB1 antibody (Dako, Glostrup, Denmark) was calculated by counting positively stained tumor cells per 1000 tumor cells in the hot spots.

Circulating endothelial cells

Blood samples were drawn into CellSave tubes (Veridex, LLC, NJ) prior to, at 8 weeks after treatment initiation, and at completion of the neoadjuvant treatment. Samples were sent to the central laboratory at Kyoto University, where they were processed within 72 h after blood sampling. All evaluations were performed without prior knowledge of the patients' clinical status. The CellSearch system was used for endothelial cell detection, as described previously [17][18][19]. In brief, magnetic separation was performed using anti-CD146 ferrofluids, followed by labeling with the nuclear stain 4,6-diamidino-2-phenylindole (DAPI), a phycoerythrin-conjugated anti-CD105 antibody, and an allophycocyanin-conjugated anti-CD45 antibody. An additional channel was used for an anti-CD34 antibody conjugated to FITC (clone AC136, Miltenyi Biotec GmbH, Germany). CECs were defined as CD146+CD105+CD45−DAPI+ cells in this study. As CD34 is another marker that is positive in circulating endothelial cells, the anti-CD34 antibody was added to the additional channel [20]. A gray-scale charge-coupled camera device was used to scan the entire chamber surface, and each captured frame was then evaluated for objects that were potential CEC candidates using image analysis software.

Statistical analysis

Baseline characteristics of patients were summarized as mean (range) for continuous variables and number (%) for categorical variables. The clinical/pathological response rate and the breast conservation rate were calculated with 95% confidence intervals (CIs). AEs during treatment were tabulated based on their CTCAE grades. OS and DFS during follow-up were estimated and compared using the Kaplan-Meier method and log-rank test between groups stratified based on patient characteristics and clinical/pathological outcomes of the neoadjuvant treatment.
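As an illustration only (the study used SPSS and R, not Python), the Kaplan-Meier estimation and log-rank comparison described above could be reproduced along the following lines with the lifelines package; the toy data and column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data; the trial dataset itself is not public.
# "months" = follow-up time, "event" = 1 if relapse/death was observed,
# "responder" = clinical responder (1) vs nonresponder (0).
df = pd.DataFrame({
    "months":    [12.0, 30.5, 68.5, 24.1, 80.0, 55.3, 70.2, 18.1],
    "event":     [1, 0, 0, 1, 0, 0, 0, 1],
    "responder": [0, 1, 1, 0, 1, 1, 1, 0],
})

# Kaplan-Meier estimate per group.
for name, grp in df.groupby("responder"):
    kmf = KaplanMeierFitter()
    kmf.fit(grp["months"], event_observed=grp["event"], label=f"responder={name}")
    print(kmf.survival_function_)

# Log-rank comparison of the two survival curves.
responders = df[df.responder == 1]
nonresponders = df[df.responder == 0]
result = logrank_test(
    responders["months"], nonresponders["months"],
    event_observed_A=responders["event"],
    event_observed_B=nonresponders["event"],
)
print(result.p_value)
```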
In the biomarker analysis, the association of CECs (as continuous variables) with clinical response was evaluated using univariate logistic regression models. The optimal cut-off value for each statistically significant biomarker to predict clinical response was determined using the Youden's index of the receiver operating characteristic (ROC) curve. Patients were stratified based on the cut-off value into two groups, and the survival rate was compared between them. Multivariate survival analyses were performed using Cox proportional hazards models consisting of statistically significant variables from the survival analyses mentioned above. Multicollinearity was assessed using Spearman's rank correlation coefficient. To address data sparseness, Firth's penalized likelihood approach was applied in the regression analyses. A two-sided P-value below 0.05 was considered significant. Statistical analyses were performed using IBM SPSS Statistics 23.0 (IBM Corp., Armonk, NY) and R ver. 3.2.2 (R core team, R Foundation for Statistical Computing, Vienna, Austria).

Results

Population

From October 2007 to March 2010, 41 patients were enrolled in this study at four medical institutes in Japan (Fig. 1). One patient was excluded from the ITT population because of entry criteria violation (tumor size <2 cm). Six patients were further excluded from the per-protocol set (PPS) due to entry criteria violation in three patients (higher transaminase in one, age less than 60 years in two patients), changing hospitals during the protocol treatment in one patient, and insufficient duration (<90%) of drug administration in two patients. Baseline characteristics of the entire population (safety population), the ITT population, and the PPS population are shown in Table 1.

Clinical and pathological response

The clinical response rate in the ITT population was 67.5% (52.0-80.0%), which was above the prespecified threshold (65%). Response rates in the HER2-negative and HER2-positive subgroups were 60% and 80%, respectively, which showed no statistically significant difference. Changes in Ki-67 LI were assessed based on clinical responses (Fig. 2). Ki-67 LI decreased after treatment in both responders and nonresponders, and no difference in the decrease was observed based on clinical response.

Surgical outcome

Among patients in the ITT population, breast-conserving surgery was performed in 30 patients, and the breast-conserving rate was 75% (30/40). Before treatment, breast-conserving surgery and total mastectomy were anticipated in 12 and 28 patients, respectively. Eighteen patients who were anticipated to receive mastectomy before neoadjuvant treatment received breast-conserving surgery. The conversion rate from total mastectomy to conserving surgery was 64% (18/28).

Safety

Adverse events occurring in all enrolled patients are shown in Table 2. Twenty-two patients (54%) had leukocytopenia, but most (17/22) of them were grade 1. Grade 3 leukocytopenia was observed in one patient. The most common nonhematological AE was arthralgia, which was observed in six patients. One patient was diagnosed with liver cancer 3 months after initiation of the neoadjuvant treatment, and a causal relationship with treatment is unlikely. No grade 3 or greater nonhematological toxicity was reported.

Survival analysis

Survival analyses were performed in the PPS. Among 34 patients, postoperative chemotherapy was given to seven patients. The median follow-up period was 68.5 months (range: 18.1-86.5). DFS at 5 years was 90.9% (95% CI: 48.4-90.4%).
Three patients relapsed during follow-up, one with axillary lymph node recurrence, one with chest wall recurrence, and one with lung metastasis. Overall survival at 5 years was 93.9% (95% CI: 74.4-97.0%). Two patients died during followup, one with liver cancer and the other with myocardial infarction 3 years after treatment initiation. Baseline factors including T stage, nodal involvement, HER2 status, and types of surgery were not associated with DFS. Associations of survival with clinical response and AEs were evaluated. Clinical response with US was associated with prognosis; responders showed better DFS than nonresponders (P = 0.020) (Fig. 3). Interestingly, leukocytopenia was associated with prognosis; patients with no or mild leukocytopenia (G0 or 1) had better DFS than those with severe leukocytopenia (G2 or 3) (P = 0.003) (Fig. 3). No other factors including pre-and post-treatment Ki-67 LI, changes in Ki-67 LI, or HER2 status were associated with DFS or OS. Circulating endothelial cells Circulating endothelial cells were quantified prior to and during the neoadjuvant therapy. Their association with clinical response with US and prognosis were evaluated. In nonresponders, CEC counts were significantly increased at 8 weeks (P = 0.004) compared with pretreatment counts, while in responders, no such increases were observed (P = 0.35) (Fig. 4). Similarly, CD34-positive CEC counts were increased at 8 weeks in nonresponders (P = 0.003) but not in responders (P = 0.39). Baseline counts of CEC and CD34-positive CEC did not correlate with treatment response. The association between CEC counts and prognosis was evaluated. Cut-off values for CEC and CD34-positive CEC were determined using the Youden's index of ROC curves. Baseline counts of CEC and CD34-positive CEC were significantly associated with DFS, and patients with higher counts of CEC and CD34-positive CEC showed worse prognosis than those with lower counts (P < 0.001 and P = 0.004, respectively) (Fig. 5A). In addition, posttreatment counts of CEC and CD34-positive CEC were also significantly correlated with DFS (P = 0.014 and P = 0.008, respectively) (Fig. 5B). Because clinical response and leukocytopenia were also associated with DFS, multivariate analyses of DFS, including clinical response, leukocytopenia, and pre-and posttreatment counts of CEC, were performed. Interestingly, post-treatment counts of CEC, but not pretreatment counts, were independently correlated with DFS (P = 0.046) ( Table 3). A similar result was observed for CD34-positive CEC (P = 0.043) ( Table 3). Discussion In this study, we demonstrated that neoadjuvant metronomic chemo-endocrine therapy with letrozole and cyclophosphamide showed a good response in Japanese postmenopausal women with ER-positive breast cancer, with a conversion rate from mastectomy to breastconserving surgery of 64% and tolerable toxicity. In addition, increases in CEC counts at week 8 indicated poor response, and post-treatment CEC counts showed a good and independent prognostic value. One of the advantages of neoadjuvant treatment is an increase in breast-conserving rate. The IMPAKT trial, which compared anastrozole, tamoxifen, and both in combination in the neoadjuvant setting, showed that the conversion rates from mastectomy to breast-conserving surgery were 44%, 31%, and 24%, respectively [6]. Thus, the conversion rate achieved in this study (64%) was higher than the rate in any endocrine treatment group of the IMPAKT trial. 
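As a practical aside on the biomarker methodology reported above (Youden-index cut-offs derived from ROC curves, followed by a multivariate Cox model), the following minimal Python sketch shows one way such an analysis could be set up. The data are invented for illustration, the column names are hypothetical, and the Firth penalized-likelihood correction used in the study is not reproduced here.

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve
from lifelines import CoxPHFitter

# Invented example data: post-treatment CEC counts, clinical response,
# disease-free survival time (months) and a relapse indicator.
df = pd.DataFrame({
    "cec_post":  [3, 10, 25, 4, 40, 7, 18, 2],
    "responder": [1, 0, 0, 1, 0, 1, 1, 1],
    "months":    [70, 24, 12, 66, 50, 72, 30, 40],
    "relapse":   [0, 1, 1, 0, 0, 0, 1, 1],
})

# Youden's index: pick the CEC threshold maximising sensitivity + specificity - 1,
# treating non-response (1 - responder) as the event predicted by high CEC counts.
fpr, tpr, thresholds = roc_curve(1 - df["responder"], df["cec_post"])
cutoff = thresholds[np.argmax(tpr - fpr)]
df["cec_high"] = (df["cec_post"] >= cutoff).astype(int)

# Multivariate Cox proportional hazards model for DFS. A small ridge penalty
# is added for numerical stability; it is not Firth's penalized likelihood.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df[["months", "relapse", "cec_high", "responder"]],
        duration_col="months", event_col="relapse")
cph.print_summary()
```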
Another neoadjuvant endocrine study, the PROACT trial, compared anastrozole and tamoxifen [1]. Although the conversion rate was not reported, the improvement in surgery including the conversion from mastectomy to breast-conserving surgery was observed in 38.1% and 29.9% of the patients who received anastrozole and tamoxifen, respectively. Altogether, these results suggest that the combination of letrozole and cyclophosphamide would give a higher conversion rate than endocrine therapy alone. In nonresponders, CEC counts increased at week 8, while in responders, such an increase was not observed. In our previous study, we showed that CEC counts done with the CellSearch system increased during neoadjuvant chemotherapy, especially during therapy involving taxanebased regimens [17]. Such increases have been suggested to contribute to angiogenesis and neovascularization in order to repair damaged tissues, including normal and cancerous tissues [21][22][23]. Metronomic chemotherapy is expected to prevent such a vascular rebound in neovascularization, especially in tumor tissues, which is one of the suggested mechanisms for its anticancer effect. Therefore, it is conceivable that prevention of neovascularization due to metronomic chemotherapy led to maintained CEC counts in responders, while failure of such prevention resulted in increased CEC counts in nonresponders. In our study, although CEC counts at both baseline and post-treatment showed prognostic value, only posttreatment CEC counts had independent prognostic power. Poor prognosis in patients with high post-treatment CEC counts may be a result of insufficient anti-angiogenic response with metronomic chemo-endocrine therapy. This result seems consistent with the prognostic value of posttreatment Ki-67 LI in NET, which showed better prognostic power than pretreatment Ki-67 LI [24][25][26][27]. Our results along with other reports suggest that biological responses, such as the antiproliferative response indicated by Ki-67 LI and anti-angiogenic response indicated by CEC counts after metronomic therapy, show more precise prognostic value than the baseline biology of tumors. Leukocytopenia was associated with prognosis in this study. Severe leukocytopenia (G2 or G3) was associated with worse DFS. This seems contradictory to results reported with conventional chemotherapy in adjuvant settings for early-stage breast cancer [28][29][30]. These previous studies indicated that severe myelosuppression was associated with better prognosis in patients with breast cancer receiving CMF (cyclophosphamide, methotrexate, and 5-fluorouracil) or CAF (cyclophosphamide, doxorubicin, and 5-fluorouracil), suggesting that hematological toxicity due to conventional chemotherapy may represent biological activity of the drugs, resulting in improved prognosis. However, metronomic chemotherapy has been suggested to exert anticancer effects via different mechanisms of action compared to conventional chemotherapy, one of which is activation of antitumor immune response. Indeed, low-dose cyclophosphamide has been implicated in activation of innate immunity [31][32][33]. Therefore, myelosuppression during metronomic treatment may lead to insufficient immune activation, which might result in poor treatment efficacy in patients with severe leukocytopenia. Although the objective response rate (67.5%) in our study appears a little lower than that (87.7%) in a previous report by Bottini et al. [14], some differences exist between the two studies. 
Bottini's study included only elderly patients, and thus, median patient age in our study was lower in comparison. More than half of the patients in Bottini's study had histological grade 3 tumors, while none of the patients in our study had grade 3 tumors. The clinical response was assessed using calipers in Bottini's study, while it was assessed with calipers, US, and CT/ MRI in our study. These differences might have contributed to different response rates in the two studies. This study was limited in terms of some parameters. One of its biggest limitations was its small sample size. Because this was a phase II trial investigating clinical efficacy and tolerability of combined treatment with letrozole and low-dose cyclophosphamide, the sample size was set at 40. In order to validate the clinical utility of the treatment, a larger study is warranted. It is also important to interpret the results including the prognostic analysis with this sample size cautiously. To confirm the prognostic value of CEC, it is necessary to conduct a larger study in which CECs are serially measured. The definition of ER and PgR positivity is another issue. In 2010, the American Society of Clinical Oncology/College of American Pathologists recommended that ER and PgR assays be considered positive if there are at least 1% positive tumor nuclei in the sample on testing with appropriate controls [34]. Because this study started in 2007, the old criteria of ER and PgR were used. Thus, future studies to validate our results should be conducted with the new definition of ER and PgR positivity. Another limitation was that this was a single-arm study in which chemo-endocrine therapy was not compared with either endocrine therapy alone or metronomic chemotherapy alone. It is, therefore, not clear whether combined administration of letrozole with metronomic cyclophosphamide resulted in a better outcome than letrozole or cyclophosphamide alone would have in this population. A randomized controlled study would be required for such a comparison in a larger confirmative study. In conclusion, metronomic chemo-endocrine therapy with letrozole plus cyclophosphamide showed a good response and was tolerated in Japanese postmenopausal patients with ER-positive breast cancer. An increase in CEC counts during the treatment was associated with poor response, and post-treatment CEC counts as well as clinical response were independent prognostic factors. The combination of letrozole and cyclophosphamide could be an option for postmenopausal women with ER-positive breast cancer. CEC quantification would be a promising tool for treatment monitoring and prognostic stratification following validation of our results in larger prospective studies. Acknowledgment We thank all the patients who participated in this study. We are grateful to all of the office members in JBCRG
Eprinomectin nanoemulgel for transdermal delivery against endoparasites and ectoparasites: preparation, in vitro and in vivo evaluation

Abstract

Nanoemulgels are composed of an O/W nanoemulsion and a hydrogel and are considered ideal carriers for transdermal drug delivery because they have a high affinity to load hydrophobic drugs. The stable formulation of eprinomectin (EPR) is very challenging because of its highly hydrophobic nature. In this work, we have prepared an EPR-loaded nanoemulgel for the treatment of endo- and ectoparasites. The surface morphology of the optimized formulations was characterized by scanning electron microscopy. Additionally, skin permeability and irritation tests were conducted for in vitro safety, and in vivo skin retention and permeation tests of the EPR nanoemulgel were conducted for the efficacy study. The obtained results indicated that the optimized formulation had good shear-thinning behavior and bioadhesive properties and consisted of nanosized droplets with a porous internal structure, which are required for topical application. Furthermore, this formulation showed good skin permeability in comparison to the suspension and had no skin-irritating properties. Overall, the obtained results proved that the nanoemulgel is a promising carrier for transdermal drug delivery and that the EPR nanoemulgel is a promising formulation for the treatment of endo- and ectoparasites.

Introduction

Hydrogels, with their three-dimensional cross-linked network structure, are well known as excellent carriers for drug loading. The mechanism of drug release from the matrix is determined by the physicochemical properties of the hydrogel and the method of drug loading. For example, a swelling-controlled release mechanism always occurs in the release of small-molecule drugs from HPMC tablets (Lin & Metters, 2006). Additionally, hydrogels have good rheological and bioadhesive properties and are biocompatible; with these unique characteristics, hydrogels are considered good carriers for topical drug delivery (Gao et al., 2016). However, the hydrophilicity of hydrogels limits their application for the delivery of hydrophobic drugs. To overcome this shortfall, a novel transdermal delivery system termed emulgel was developed, in which an emulsion is thickened by a hydrogel. Hydrophobic drugs are loaded in the oil cores, and the droplets of the emulsion are entrapped in the cross-linked hydrogel network. Upon application, the drug loaded in the internal phase passes through the external phase to the skin and is slowly absorbed (Ajazuddin et al., 2013). In order to improve the retention ability, bioadhesive polymers such as carbomer 934, carbomer 940, and hydroxypropylmethylcellulose are used as gel-forming agents, which can influence the bioadhesive behavior of the emulgel (Djekic et al., 2019). The drug release pattern and quantity from these emulgels are influenced by the type of gelling agent and the concentrations of emulsifier and oil base used in the emulsion (Mohamed, 2004). By controlling the emulsion base in emulgels, the bioadhesive properties and the sustained drug release pattern can be tuned for a prolonged therapeutic effect at the desired site of action (Palcso et al., 2019). The viscosity of emulgels is lower than that of hydrogels, which makes them easy to spread at the site of application.
Gel-forming agents in hydrogels exhibit multiple functions, acting as thickeners (increasing the viscosity of the formulation) and as emulsifiers (decreasing the interfacial tension) to enhance the stability of emulgels (Bonacucina et al., 2009; Javed et al., 2018; Sohail et al., 2018). Thus, emulgels have been investigated for the delivery of hydrophobic drugs such as propolis, cyclosporine A, and amphotericin B (Shen et al., 2015; Balata et al., 2018; Pinheiro et al., 2019). From various experimental findings on the emulgel delivery system, it was found that nanosized emulsions (generally ranging from 20 to 500 nm) gain the advantage of a large surface area, which allows rapid penetration through the pores to reach the systemic circulation. Drug from a nanoemulgel can permeate the skin through both the transcellular and the paracellular route, while a nanoemulsion delivers the drug through the transcellular route only (Rambharose et al., 2017; Sengupta & Chatterjee, 2017). It has been found that parasites always have negative effects on the growth and fertility of domestic animals. For instance, heartworm disease in cats may present with clinical signs such as coughing, vomiting, dyspnea or severe respiratory distress, and gastrointestinal nematode infections in cows also cause loss of milk production and reduction of quality (May et al., 2017). Worse, because of the close contact between humans and domestic animals, humans can easily be infected, which poses a threat to public health (Tielemans et al., 2014). Eprinomectin (EPR; 4″-epi-acetylamino-4″-deoxy-avermectin B1) is a hydrophobic 16-membered macrocyclic lactone developed by Merck research laboratories. Besides its great therapeutic efficacy against endo- and ectoparasites such as gastrointestinal nematodes, horn fly, and lice (Shoop et al., 1996; Cringoli et al., 2003), it was proved to be safe for lactating dairy animals (Baoliang et al., 2006). The marketed EPR formulations include a topical formulation (EPRINEX® Pour-on, Merial) and an injectable formulation (Eprecis® 20 mg/mL, Ceva) (Hamel et al., 2018; Termatzidou et al., 2019). However, the Pour-on formulation has weak adhesion and is administered by pouring along the animal's back line (Hamel et al., 2018), which may lead to short skin contact time and decreased efficacy. The injectable formulation can only be used by professionally trained technicians. To meet the requirements of long-term treatment and topical use, we have developed an EPR nanoemulgel consisting of a nanoemulsion and a hydrogel with good bioadhesion and better permeability than a conventional emulgel or suspension.

Preparation of EPR nanoemulsion

To prepare the EPR nanoemulsion, first the non-aqueous phase was prepared by dissolving 0.55 g EPR in 1 mL of ethanol with the help of an ultrasonic bath, followed by mixing it with castor oil in a 40 °C water bath. Thereafter, the mixture was placed into a suitably sized round-bottom flask and rotary evaporated at 38 °C (RE-52AA, Shanghai Yarong Biochemistry Instrument Factory, Shanghai, China) for 20 min to remove the ethanol. Second, 10 g of aqueous phase was prepared by mixing Tween 80 (0.22, 0.33, or 0.44 g) and Labrasol® (0.22, 0.33, or 0.44 g) into water.
Then, the aqueous phase was slowly added into the non-aqueous phase with continuous stirring on a magnetic stirrer (85-2, Shanghai Sile Instrument Co., Ltd, Shanghai, China), followed by homogenization at 8000 r/min (XHF-D, Ningbo Scientz Biotechnology Co., Ltd, Ningbo, China) for 6 min to obtain the primary emulsion. Further, this primary emulsion was passed through a high-pressure homogenizer at 350 bar for three cycles (AMH-3, ATS Nano Technology Co., Ltd, Suzhou, China) to obtain the final O/W EPR nanoemulsion (Figure 1(A)).

Preparation of EPR nanoemulgel

To prepare the EPR nanoemulgel, first a hydrogel matrix was prepared by soaking carbomer 940 (0.05, 0.1, or 0.2 g) in water to form 10 g of carbomer gel matrix. Thereafter, 1 g of EPR nanoemulsion (containing 0.05 g EPR) was added to the gel matrix with slow continuous stirring. The final EPR nanoemulgel was obtained by adjusting the pH to 6-7 with the addition of aqueous sodium hydroxide solution (Figure 1(B)). In the same way, the hydrogel incorporating the EPR emulsion was the EPR emulgel.

Optimization of EPR nanoemulgel

Each component of the formulation should be studied in order to investigate its individual effect on the EPR formulation (such as the viscosity of the nanoemulgel and the Ke of the nanoemulsion). Thus, the effects of castor oil (0.40, 1.19, and 2.00 g), Tween 80 (0.22, 0.33, and 0.44 g), and Labrasol® (0.22, 0.33, and 0.44 g) were studied, varying one component at a time.

Stability test (determination of Ke)

The stability test of the prepared emulsion was performed by a centrifugation method, which is important for a homogeneous nanoemulgel (Mouri et al., 2016). For this, 5 mL of the nanoemulsion was placed in a 10 mL centrifuge tube and centrifuged at 4000 r/min (KDC-140HR, Anhui USTC Zonkia Scientific Instrument Co., Ltd, Hefei, China) for 15 min. After centrifugation, about 25 µL of sample from the bottom of the test tube was taken out accurately with a micro-pipette (50 µL) and diluted to 10 mL with distilled water. Distilled water was used as a blank to measure the absorbance at a visible wavelength of 500 nm. A 25 µL sample of the uncentrifuged nanoemulsion was processed in the same way. The stability constant Ke was calculated as Equation (1), where A0 is the absorbance of the diluted nanoemulsion without centrifugation at the set wavelength and A is the absorbance of the diluted nanoemulsion after centrifugation at the same wavelength. Due to the difference in density between the oil and aqueous phases, the oil droplets float during the centrifugation process, which leads to a change of absorbance before and after centrifugation. In this case, a smaller stability constant Ke indicates that the nanoemulsion is more stable.
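The form of Equation (1) is not shown above. As a hedged illustration only: a stability constant of this kind is commonly defined from the relative change in absorbance caused by centrifugation, which is consistent with the stated behavior (smaller Ke for a more stable emulsion). The minimal sketch below assumes Ke = (A0 − A)/A0 × 100%; the exact published form may differ.

    def stability_constant_ke(a0, a):
        # Assumed form of Equation (1): relative absorbance change (%)
        # caused by centrifugation; smaller Ke indicates a more stable emulsion.
        return (a0 - a) / a0 * 100.0

    # Example: absorbance 0.52 before and 0.50 after centrifugation -> Ke ~ 3.8%
    print(round(stability_constant_ke(0.52, 0.50), 1))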
Rheological test

The viscosity of the various nanoemulgels was measured with a rheometer (DV-III Ultra, Brookfield, Middleboro, MA) using an SC4-16 rotor at room temperature (25 °C), with the shear rate varied from 20 to 100 r/min.

Droplet size analysis and zeta potential

The optimized EPR nanoemulsion was diluted with water (1:1000, v/v), and the droplet size and zeta potential of the diluted EPR nanoemulsion were measured with a zeta sizer (Brookhaven Instruments Corporation, Holtsville, NY). The EPR emulsion was characterized for droplet size and zeta potential in the same way.

Morphology investigation

The internal three-dimensional structures of the optimized EPR nanoemulgel, the optimized blank nanoemulgel, and the 1% carbomer hydrogel were observed with a scanning electron microscope (SEM; Su8020, Hitachi, Tokyo, Japan). For the SEM investigations, the samples were prepared by rapid freezing in liquid nitrogen followed by freeze-drying for 48 h (Al-Abboodi et al., 2019).

Bioadhesion evaluation

The adhesive property is critical for formulations intended for topical application, as it determines the contact duration of the applied formulation on the skin. It can be evaluated by a number of test methods such as the peel force test, adhesive strength test, and tack test (Al Hanbali et al., 2019). In this experiment, the bioadhesive strength of the formulations was quantified with a self-made device (Figure 2; Xiang et al., 2002). To perform this test, healthy mice were sacrificed by cervical dislocation. Hair on the mid-abdominal side was shaved, and the skin was quickly peeled off to remove the subcutaneous fat and tissue. The obtained skin was preserved at −20 °C until further use. The preserved skin was thawed at 37 °C, cut into two pieces of 1.8 cm × 1.8 cm, and adhered onto the surfaces of slides 'a' and 'b' with cyanoacrylate adhesive, respectively. Here, the EPR nanoemulgels containing 0.05, 0.1, and 0.2 g carbomer were named nanoemulgel A, B, and C, respectively. An appropriate amount of nanoemulgel A, B, or C, with varied viscosity, was weighed and applied on the skin surface. Thereafter, a pressure of 0.3025 N/cm² was applied on slide 'a' for 10 s in order to bring the two slides into close contact. As shown in Figure 2, slide 'a' was fixed and slide 'b' was attached to a plastic bag. Distilled water was dropped from an infusion bottle through the attached tube at a rate of 2 drops/s into the plastic bag. The detachment of the two slides under the weight of the water in the plastic bag was observed; once the slides started to detach, the water dropping into the plastic bag was stopped and the bioadhesion force per unit area was calculated using Equation (2), where f represents the bioadhesion force per unit area (g/cm²), m is the total mass of water applied on slide 'b' (g), and r represents the size (1.8 cm) of the skin piece on which the gel was applied.
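Equation (2) is likewise not reproduced above. Given the stated variables and units, a plausible reading is that the bioadhesion force per unit area is the detachment mass of water divided by the skin contact area, f = m/r². The snippet below is only a sketch under that assumption; the published equation may include additional factors.

    def bioadhesion_force_per_area(m_water_g, r_cm=1.8):
        # Assumed form of Equation (2): detachment mass (g) divided by the
        # contact area r*r (cm^2); reported in g/cm^2 as in the text.
        return m_water_g / (r_cm ** 2)

    # Example: 16.2 g of water at detachment -> ~5.0 g/cm^2
    print(round(bioadhesion_force_per_area(16.2), 1))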
Eprinomectin quantification

The EPR concentration was quantified by HPLC using an LC-20AT Shimadzu system with an autosampler (Shimadzu Corporation, Kyoto, Japan) and a UV detector (Shimadzu SPD-20A UV/Vis detector, Kyoto, Japan) at 245 nm. A Kromasil ODS column, 5 µm (250 mm × 4.6 mm), was used for separation at a column oven temperature of 30 °C. The mobile phase comprised 75% acetonitrile and 25% water at a flow rate of 1 mL/min. Initially, a series of standard solutions of known concentration ranging from 0.03 to 10 µg/mL was run to draw the calibration curve used to quantify the concentration of EPR in the test samples.

In vitro permeation study

An in vitro permeation study was conducted to evaluate the permeability of EPR from the different formulations. For this, a Franz diffusion cell system with a 2.27 cm² effective diffusion area (TPY-2, Shanghai Huanghai Pharmaceutical Inspection Instrument Co., Ltd, Shanghai, China) was used. Skin without any defects was mounted between the donor and receptor compartments of the diffusion cell, with the epidermis side facing the donor compartment. The receptor compartment was filled with 6.5 mL of receptor buffer (normal saline containing 30% methanol). After that, EPR suspension, EPR emulgel, and EPR nanoemulgel containing the same quantity of EPR were placed separately in the donor compartment. The temperature in the diffusion chamber was maintained at 37 ± 0.5 °C in a thermostatic water bath. Samples from the receptor chamber were collected at predetermined time intervals (1, 2, 4, 6, 8, 10, and 12 h) with replacement of the same volume of fresh aerated receptor buffer. The solutions were filtered through a membrane filter (0.22 µm) and run on HPLC to quantify the content of EPR at each time interval. The cumulative penetration Q was calculated by Equation (3), where Cn is the concentration of EPR in the receptor solution at the nth sampling point, Ci is the concentration of EPR before the nth sampling point, and V0 is the volume of the receptor solution (6.5 mL). The apparent permeation coefficient (Papp, cm/s) of EPR was calculated by Equation (4), where C0 is the initial EPR concentration in the donor chamber, A is the effective diffusion area (2.27 cm²), and ΔQ/Δt is the slope of the straight-line portion of the Q-t plot. The steady-state flux (Jss, µg cm⁻² h⁻¹) was determined by Equation (5).

Cytotoxicity

The toxicity of the EPR suspension and the EPR nanoemulgel was investigated by the MTT assay using the L929 mouse fibroblast cell line. Briefly, L929 fibroblast cells were seeded into 96-well plates at a density of 6 × 10³ and incubated in RPMI 1640 with 10% FBS at 37 °C with 5% CO2. Both the EPR suspension and the EPR nanoemulgel were diluted with serum-free RPMI 1640 medium to the same EPR concentrations, ranging from 1 to 10,000 µg/mL in ten-fold steps. After 24 h of cell incubation, the culture medium was gently aspirated from each well using a suction pump, and the test samples were added to each well. The plates were re-incubated for another 24 h under the same conditions. After 24 h, the medium in each well was discarded and 200 µL of 0.5 mg/mL MTT solution was added to each well. Then, the media were removed and the MTT formazan crystals were dissolved in 150 µL dimethyl sulfoxide. Finally, the absorbance was measured on a microplate reader at 570 nm, and the cell viability of each sample was calculated using Equation (6).
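The displayed formulas for Q, Papp, Jss, and the MTT viability (presumably Equations (3)-(6) in the original numbering) are not reproduced in the text above. The sketch below spells out the standard forms that the surrounding definitions point to; they are assumptions, not the verbatim equations of the paper (in particular, the sampling-volume correction in Q and the normalization of viability to the untreated control are inferred).

    def cumulative_q(receptor_conc, v0=6.5, v_sample=0.0):
        # Assumed cumulative amount permeated at each sampling point,
        # correcting for analyte removed at earlier samplings.
        # receptor_conc: concentrations C_1..C_n measured in the receptor (ug/mL).
        return [c * v0 + v_sample * sum(receptor_conc[:i])
                for i, c in enumerate(receptor_conc)]

    def p_app(slope_dq_dt, c0, area=2.27):
        # Assumed apparent permeability coefficient: (dQ/dt) / (A * C0).
        return slope_dq_dt / (area * c0)

    def j_ss(slope_dq_dt, area=2.27):
        # Assumed steady-state flux: (dQ/dt) / A.
        return slope_dq_dt / area

    def mtt_viability(abs_sample, abs_control):
        # Assumed viability: % of the untreated-control absorbance at 570 nm.
        return abs_sample / abs_control * 100.0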
Skin irritation test

A skin irritation test was conducted to determine the irritation safety level of the EPR nanoemulgel. In some cases, parasites cause skin lesions (Steelman, 1976; Goldberg & Bursey, 1991), so it is necessary to investigate whether the preparation irritates damaged skin as well as intact skin. For this, intact and damaged skin models were prepared according to a previous study (de Oliveira et al., 2014). Briefly, 15 healthy male Sprague-Dawley (SD) rats (weight 180-220 g) were shaved on the back and kept under observation for 24 h to check whether any damage appeared on the skin. If any damage was found on the skin of a rat, that animal was not included in the study. After 24 h of observation, the shaved area on the back of each rat was divided into two regions, one intact and the other damaged. To create the damaged region, a scalpel was used to make an abrasion on the skin with a slightly bloody appearance, as shown in Supplementary Figure S1. After that, the rats were randomly divided into five groups: control saline, blank nanoemulgel, EPR nanoemulgel, blank nanoemulsion, and EPR nanoemulsion. The preparations were applied on both regions over a 2 cm × 2 cm area. All animals were given the same ad libitum access to food and water in a room kept at 25 °C and 55% humidity. All animals were monitored for the presence of erythema and edema at 1, 24, 48, and 72 h, and the symptoms were scored based on their visibility as listed in Supplementary Tables S1 and S2 (Jia et al., 2008). The average stimulation score was calculated as Σ(sum of erythema scores + sum of edema scores)/number of animals (Al Hanbali et al., 2019). After completion of the 72 h study, the rats were sacrificed by cervical dislocation, and skin samples of about 1 cm × 1 cm were collected from the center of the test area. The collected samples were preserved in 4% polyformaldehyde at 4 °C. The skins were embedded in paraffin, sectioned into 5-µm-thick sections, and stained with H&E dye. The histological changes were imaged with a light microscope (Nikon, Tokyo, Japan).

Skin retention and in vivo permeation study

The skin retention and in vivo permeation of the EPR formulations were studied to evaluate drug permeability through the skin. FITC was used as a model dye in place of EPR in this experiment. Different formulations (FITC nanoemulgel, FITC emulgel, and FITC suspension) were prepared (at a concentration of 0.04% FITC w/w) in the same manner as the EPR formulations. For the in vivo study, ICR mice were randomly divided into four groups, and the hair on their backs was removed with 8% sodium sulfide 12 h before application. The four groups of mice were treated with FITC solution, FITC nanoemulgel, FITC emulgel, and FITC suspension, respectively. The applied area was maintained at approximately 2 cm × 2 cm. The mice were sacrificed at 1, 4, and 8 h; the skin was washed with normal saline, and a 1 cm × 1 cm area was taken out for tissue sections and observed by fluorescence microscopy (Nikon Eclipse C1, Nikon, Tokyo, Japan).

Statistical analysis

The results are expressed as mean ± SD, and the data were statistically analyzed with Student's t test (two groups) and one-way analysis of variance (ANOVA, multiple groups). Differences were considered statistically significant at p < .05. Statistical analysis was performed with SPSS 22.0.

Results and discussion

Single factor design of experiment

A single-factor experimental design was employed to statistically optimize the formulation parameters of the EPR nanoemulgel for better stability.

Effect of the non-aqueous agent

From Figure 3(A,B), it can be seen that the content of castor oil has no significant effect on the viscosity of the EPR nanoemulgel but has a significant effect on Ke. The Ke values of the EPR nanoemulsions containing 0.40 and 1.19 g castor oil were close to 0, whereas the Ke of the 2.00 g castor oil group was 2.174, which implies that the nanoemulsion system becomes unstable when the content of the non-aqueous agent exceeds a critical value. Therefore, the content of the non-aqueous agent, i.e. castor oil, selected for this formulation was 1.19 g (10.8% w/w in the nanoemulsion). It was found that the viscosity of the nanoemulgel was not influenced by the content of Tween 80. However, the EPR nanoemulsions containing 0.11 and 0.22 g of Tween 80 had higher (p < .01) Ke values than the other two groups, indicating the formation of weaker bonds and easy separation into two layers. A reduction of the emulsifier content lowers the emulsification effect, which in turn causes easy aggregation of the oil droplets. To decrease the skin irritation caused by the surfactant, a formulation containing less surfactant was preferred.
In this case, the optimized EPR nanoemulsion contained 0.33 g Tween 80 (3% w/w in the nanoemulsion). Figure 3(E) shows the effect of the co-emulsifier. As can be seen from the figure, increasing the content of Labrasol® from 0.22 to 0.44 g leads to a higher viscosity of the EPR nanoemulgel; this may be due to a decrease of the HLB value, which strengthens the interaction between the hydrophilic polymer and the oil droplets. The EPR nanoemulsion containing 0.44 g Labrasol® exhibited the worst stability. From the design of experiment, it was found that the optimum quantity of Labrasol® was 0.33 g, at which the formulation showed good rheological characteristics and permeability.

Effect of carbomer on viscosity of nanoemulgel

As can be seen in Figure 3(G), carbomer has a direct effect on the viscosity of the nanoemulgel; increasing the amount of carbomer from 0.05 to 0.2 g proportionally increases the viscosity. In general, a formulation with higher viscosity is beneficial during application on the skin because it is retained on the skin for a longer period. However, too high a viscosity leads to inconvenience in use, because the formulation cannot spread sufficiently and the movement of the loaded drug is retarded by the denser cross-linked chains (Jain et al., 2016). In this case, the optimized amount of carbomer was fixed at 0.1 g.

Rheological properties

The results presented in Figure 3 show the relationship between viscosity and shear rate. All the prepared EPR nanoemulgels exhibit non-Newtonian shear-thinning behavior, which originates from the breaking of entanglements or physical nodes that are responsible for raising the viscosity of the nanoemulgel. The polymer chains then realign in the direction of the applied strain, leading to a lowering of the viscosity (Barradas et al., 2017). This shear-thinning behavior is suitable for a transdermal delivery system, as the nanoemulgel does not drip from the finger or at the application site. Moreover, it makes the formulation easy to spread uniformly on the skin surface.

Zeta size analysis and zeta potential

The size analysis showed that the optimized EPR nanoemulsion had an average droplet size of 327.43 ± 10.35 nm at 25 °C with a PDI of 0.183. In addition, the average zeta potential of the optimized EPR nanoemulsion was −29.10 ± 1.27 mV, which indicates that the nanoemulsion was stable. At the same time, the emulsion prepared for the emulgel had an average droplet size of 1154.54 ± 296.08 nm with a PDI of 0.332, and its zeta potential was −44.06 ± 1.28 mV (Supplementary Figure S2).

Morphology analysis

The physical appearance and FE-SEM images of the 1% carbomer hydrogel, the blank nanoemulgel, and the EPR nanoemulgel are shown in Figure 4. As shown in the figure, the 1% carbomer hydrogel is transparent, while the blank nanoemulgel and the EPR nanoemulgel are white semisolids due to the incorporation of the nanoemulsion. All the formulations were homogeneous, stable, and viscous (Figure 4(A-C)). The SEM image of the 1% carbomer hydrogel shows tiny mesh pores, whereas the images of the blank nanoemulgel and the EPR nanoemulgel exhibit interconnected pores with a random size distribution. This porous structure provides sufficient space for high drug loading and drug movement throughout the matrix, and it enhances the drug release rate.

Bioadhesion of different EPR emulgels

The concentration of carbomer is directly proportional to the adhesion of the EPR nanoemulgel, as can be seen from Table 1. The f values of nanoemulgels B and C differ significantly (p < .01) from that of nanoemulgel A.
The mechanism of mucoadhesion has been explained by several theories, such as the electronic theory and the wetting theory. Bioadhesion is a critical factor in transdermal drug delivery systems (TDDS), since drug absorption is related to the drug partition between the TDDS and the skin. Generally speaking, constant TDDS/skin contact over the application period allows consistent drug delivery and absorption (Banerjee et al., 2014). Here, the bioadhesion of the EPR nanoemulsion was tested as a control; it exhibited almost no bioadhesion to the skin and would therefore not be suitable for long-term treatment.

In vitro permeation of EPR nanoemulgel

In vitro transdermal penetration experiments were carried out to investigate the effect of the emulgel and the nanoemulgel on the transdermal absorption of EPR. The results showed that EPR permeated through the skin at a constant rate, and the diffusion behavior was in accordance with zero-order kinetics (Supplementary Figure S3). The cumulative permeated amount of EPR from the different formulations is listed in Table 2. The nanoemulgel and the emulgel significantly enhanced (p < .01) the permeability of EPR, by 8.07-fold and 5.57-fold compared with the suspension, which indicates that the nanoemulgel is better absorbed than the suspension as a result of multiple factors, including skin hydration, the loosening of the stratum corneum lipid bilayers by the surfactant, and the penetration of oil droplets through the stratum corneum (Ajazuddin et al., 2013; Sengupta & Chatterjee, 2017). Additionally, the permeation rate of the nanoemulgel was significantly faster than that of the emulgel, by 1.45-fold, due to the larger surface available for transfer (Hu et al., 2019).

Cytotoxicity

The cytotoxicity of the EPR formulations was evaluated by the MTT assay using the L929 fibroblast cell line. The results shown in Supplementary Figure S4 revealed that both the EPR suspension and the nanoemulgel showed no obvious cytotoxicity at concentrations ranging from 1 µg/mL to 10 mg/mL, and there was no significant difference (p > .05) between the two groups. According to the available information in the FDA Inactive Ingredient Search for Approved Drug Products, the maximum amount of Tween 80 in a gel is 8.5% w/w, of caprylocaproyl macrogolglycerides (Labrasol®) in a cream 7.5% w/w, of castor oil in an ointment 14.9% w/w, and of carbomer 940 in a gel 1.1% w/w for topical use. This suggests that the EPR nanoemulgel, consisting of 0.27% w/w Tween 80, 0.27% w/w Labrasol, 0.98% w/w castor oil, and 1% w/w carbomer 940, can be a promising and safe candidate for transdermal application.

Skin irritation test

The safety of the EPR nanoemulgel was further evaluated to ensure that it has low skin toxicity and causes negligible irritation to the skin. The average skin stimulation scores in Supplementary Table S3 indicated that no irritation was observed for the blank nanoemulsion, blank nanoemulgel, EPR nanoemulsion, or EPR nanoemulgel on either intact or damaged skin. Moreover, in order to eliminate the limitation of visual assessment, histological analysis is preferable to observe any possible changes in the skin. The histopathological results (Figure 5(A)) indicated that intact skin treated with normal saline had no inflammation, the epithelial cells were arranged neatly, the collagen fibers in the dermis were arranged regularly, and accessory organs such as the hair follicles and sebaceous glands were observed without any damage.
In the damaged skin group, mild edema was observed in the upper part of the dermis, and single lymphocyte infiltration was observed (Figure 5(B)). For intact skin treated with the different EPR formulations, no obvious inflammation was observed, indicating that the formulations were not irritating to the intact skin of the rats (Figure 5(C)). A small amount of inflammatory cell infiltration was found in the skin of the control groups, while the damaged skin treated with the EPR nanoemulsion and the EPR nanoemulgel was intact and exhibited no significant inflammation. It has been reported that avermectin and its derivative ivermectin exert an anti-inflammatory effect (Ci et al., 2009; Yan et al., 2011). As EPR is a derivative of avermectin, it may also have an anti-inflammatory effect. The overall results indicated that the EPR nanoemulgel was within the limits of skin tolerance and safe to use in transdermal applications.

In vivo skin retention and permeation study

The results of the in vivo skin retention and permeation study are shown in Figure 6. As explained in the methods, FITC was used as a model drug to replace EPR in preparing the FITC nanoemulgel and emulgel, and the FITC solution was kept as a control group. The FITC concentration was kept the same in all samples applied on the skin of the animals. After 8 h, the skins were imaged under a fluorescence microscope; the obtained images showed almost no penetration of FITC into the skin of animals treated with the FITC suspension. This may be due to the shorter contact duration with the applied skin because of the weaker bioadhesion of the suspension. Compared with the liquid preparation, the FITC emulgel and nanoemulgel possess better bioadhesion and a prolonged retention time. After 4 h of application, penetration of FITC into the dermis region of the skin was seen and was more prominent at 8 h, which indicates that the emulgel can promote the transdermal delivery of the drug. Similarly, in the nanoemulgel group, the drug loaded into the nano-sized oil droplets has a larger surface area and a higher tendency to diffuse out of the matrix. As can be seen from the images of the FITC nanoemulgel, the penetration of FITC started just 1 h after application and gradually increased with time. Comparing the 8 h images of all groups, it is clear that the nanoemulgel has a higher capacity to deliver loaded drugs into the dermal region of the skin than the emulgel and the liquid preparations. From this, it can be concluded that the EPR-loaded nanoemulgel is retained for a longer period on the skin and can release a sufficient amount of drug to provide a prolonged therapeutic local action.

Conclusion

This study reported the synthesis and characterization of a nanoemulgel as a promising drug carrier to load and deliver a lipophilic antiparasitic drug, EPR. The in vitro cytotoxicity study and skin irritation test revealed that the concentrations of pharmaceutical excipients used in the formulation are well within tolerated amounts and safe. Due to its good bioadhesive property compared with liquid medications, this nanoemulgel is retained for a prolonged period at the applied area. The presence of a hydrogel matrix and an O/W emulsion system makes the nanoemulgel a unique TDDS: the O/W emulsion system provides a sufficient non-aqueous phase to load the hydrophobic drug, and the hydrogel matrix helps to trap the nanosized drug-loaded droplets for a prolonged period. In conclusion, this EPR-loaded nanoemulgel is a safe and promising TDDS for the treatment of endo- and ectoparasites.
Spondweni Virus in Field-Caught Culex quinquefasciatus Mosquitoes, Haiti, 2016

Spondweni virus (SPONV) and Zika virus cause similar diseases in humans. We detected SPONV outside of Africa from a pool of Culex mosquitoes collected in Haiti in 2016. This finding raises questions about the role of SPONV as a human pathogen in Haiti and other Caribbean countries.

Spondweni virus (SPONV) and Zika virus are closely related flaviviruses that were first described in Africa in 1952 and 1947, respectively (1). Humans infected by these viruses have similar clinical manifestations; asymptomatic infections are common, and illness is generally self-limiting (1). In the 6 documented human SPONV infections, fever occurred in all. Other symptoms included headache, nausea, myalgia, conjunctivitis, and arthralgia; only 1 SPONV-infected person had maculopapular and pruritic rash (1). The similar clinical presentations for these virus infections and reportedly high serologic cross-reactivity have resulted in frequent misdiagnosis (1). Because of the 2015-2016 epidemic of Zika fever in the Western Hemisphere and the link between microcephaly and Zika virus infection, Zika virus has been studied more comprehensively than SPONV (1). SPONV was first isolated from Mansonia uniformis mosquitoes during virus surveillance in 1955 in South Africa (2). No new reports of SPONV surfaced despite continued mosquito surveillance until 1958, when it was identified in 4 additional mosquito species, including Aedes circumluteolus, a tropical sylvatic mosquito found in Africa (2). Little is known about possible vertebrate hosts, although SPONV antibodies have been detected in birds, small mammals, and ruminants (2). In a recent study by Haddow et al., strains of Ae. aegypti, Ae. albopictus, and Culex quinquefasciatus mosquitoes were not susceptible to SPONV infection (3). We detected SPONV from a pool of 7 mixed-sex Cx. quinquefasciatus mosquitoes collected in July 2016 during ongoing arbovirus surveillance in Gressier, Haiti. During May-August 2016, we caught 1,756 mosquitoes using Biogents Sentinel traps (BioQuip Products, Rancho Dominguez, CA, USA) within a 10-mile radius in Gressier, a semirural setting. Trap locations were selected based on environmental considerations, low risk for traps being disturbed, and known human arbovirus-caused illnesses in the area (4). Trap bags were transported to a field laboratory in Haiti, where mosquitoes were frozen at -20°C, then identified by species and sexed by trained technicians using morphologic keys and identification guides (5,6). After identification, the mosquitoes were pooled by location, collection date, species (Ae. aegypti, Ae. albopictus, Cx. quinquefasciatus, and other), and sex. All pools were screened for chikungunya virus, dengue virus (DENV) serotypes 1-4, and Zika virus RNA by real-time reverse transcription PCR (rRT-PCR) (online Technical Appendix Table 1, https://wwwnc.cdc.gov/EID/article/24/9/17-1957-Techapp1.pdf), as we previously have done with human specimens from Haiti (4). Mosquito homogenates positive by rRT-PCR were used for sequencing using primer walking and Sanger sequencing methods as previously reported (4; online Technical Appendix Table 2).
In addition, we confirmed Aedes and Culex mosquito species by molecular methods (7,8). In initial screens of a pool of 7 mixed-sex Cx. quinquefasciatus mosquitoes (non-blood-fed) collected on July 4, 2016, rRT-PCR results suggested the presence of Zika virus RNA (cycle threshold value 39), but this same pool was negative for chikungunya virus and DENV RNA by rRT-PCR. After unsuccessful attempts to amplify Zika virus-specific amplicons using previously described Zika virus sequencing primers, we used an unbiased sequencing approach after treatment of virions in mosquito homogenate with cyanase (4). Because we suspected a closely related virus, we next tested random hexamers and SPONV-specific primers (online Technical Appendix Table 3), which resulted in formation of virus-specific amplicons (online Technical Appendix). Thereafter, using SPONV primers, we determined a 10,290-nt nearly complete genome and deposited it in GenBank (accession no. MG182017). The SPONV genome from Haiti shared 10,174 (98.8%) of 10,290 nt identity with a SPONV isolate from mosquitoes in South Africa in 1954 (GenBank accession no. DQ859064) and 9,958 (96.8%) of 10,287 nt identity with the SPONV Chuku strain from blood of a febrile human patient in Nigeria in 1952 (accession no. KX227369) ( Table). When compared with the Zika virus reference strain from Uganda (accession no. KY989511), a strain from Puerto Rico (accession no. KU501215), and a strain from Haiti in 2016 (accession no. MF384325), Zika virus and SPONV clearly continue to diverge because the nucleotide and amino acid identities of SPONV are less similar to more recent strains of Zika virus (Table). Few SPONV sequences have been deposited into GenBank, resulting in insufficient information to predict how and when SPONV was introduced in Haiti. In the Americas and the Caribbean, SPONV is a potential emergent arbovirus and public health threat that manifests clinically with symptoms and signs similar to those of Zika virus infection (2,9). Misdiagnosis has been documented, and it is possible that SPONV has caused human infection in Haiti but has been misidentified as infection from DENV or other arboviruses (9). Little is known about SPONV pathogenesis, host range, and vector competency, especially with vectors present in the Western Hemisphere. Our detection of SPONV in Cx. quinquefasciatus mosquitoes raises questions about the role of this species as a vector for this virus and highlights the need for ongoing surveillance for SPONV infection among humans in the Caribbean, combined with studies of potential vector populations.
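For reference, the nucleotide identities quoted above (e.g., 10,174 identical positions out of 10,290) are pairwise identities computed over an alignment. The minimal sketch below illustrates that arithmetic only; it assumes two already-aligned sequences of equal length and is not the alignment pipeline used in the study.

    def percent_identity(aligned_a, aligned_b):
        # Percent nucleotide identity over a pairwise alignment; positions are
        # compared one-to-one, so both inputs must have the same length.
        assert len(aligned_a) == len(aligned_b)
        matches = sum(a == b for a, b in zip(aligned_a, aligned_b))
        return 100.0 * matches / len(aligned_a)

    print(percent_identity("ACGTACGT", "ACGTACGA"))  # 87.5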
Translating Between Morphologically Rich Languages: An Arabic-to-Turkish Machine Translation System

This paper introduces our work on building a machine translation system for Arabic-to-Turkish in the news domain. Our work includes collecting parallel datasets in several ways for a new and low-resourced language pair, building baseline systems with state-of-the-art architectures, and developing language-specific algorithms for better translation. Parallel datasets are mainly collected in three different ways: i) translating Arabic texts into Turkish by professional translators, ii) exploiting the web for open-source Arabic-Turkish parallel texts, and iii) using back-translation. We performed preliminary experiments for Arabic-to-Turkish machine translation with neural (Marian) machine translation tools and a novel morphologically motivated vocabulary reduction method.

Introduction

It is a well-known fact that to develop robust systems with data-driven methods, it is crucial to have large amounts of data. If the problem needs only raw monolingual data, the solution is straightforward: crawl the web and collect the data in the specific domain. In cases that require annotated data (e.g., treebanks) or parallel data (e.g., for machine translation), collecting the needed data is harder. Even though machine translation (MT) is one of the popular topics in natural language processing, most of the existing parallel texts include English as one of the languages (e.g., Europarl (Koehn, 2005), Multi-UN (Eisele and Chen, 2010)). For the remaining languages, generating a new language pair from scratch is tough work that needs extensive human effort and substantial funding. One way of translating languages with no parallel data is pivoting, which means one should find corpora for two language pairs, source-to-pivot and pivot-to-target, with a sufficient number of sentences in the same domain and then train and maintain two MT systems. Even if we can find such corpora in the expected domain for the given languages, error propagation is the biggest problem of pivoting, as the second system has to translate the erroneous output of the first system. In this work, our goal is building an Arabic-to-Turkish machine translation system for the news domain. The task is very interesting for several reasons; primarily, both the source and the target languages are morphologically rich, which proves to be a quite challenging setting. Our attention to this language pair has both social and political grounds. Arabic is the official language in most of the Middle East countries that Turkey has relations with. Moreover, there is a need for quick and cheap translation solutions for communicating with the increasing number of refugees in Turkish-speaking areas. The news domain is selected as it has several benefits, such as the fact that at least one side of the parallel texts can be found publicly on the web (e.g., several news portals), and Arabic is written in Modern Standard Arabic in the news domain, which is common to all Arabic speakers. To collect the data, both monolingual and bilingual data on the web are exploited. A selected portion of the monolingual data is translated into Turkish by professional translators, the publicly available but out-of-domain parallel data is cleaned and used directly, and, lastly, the rest of the monolingual Turkish data is back-translated to train our systems.
Both unsupervised and supervised morphology reduction techniques are used to reduce the vocabulary size to a fixed number and to fit our vocabulary into a given number of tokens while training the neural machine translation (NMT) systems. This paper is organized as follows: Section 2 gives brief information about the source and target languages. Section 3 describes the data collection methods, and Section 4 introduces the segmentation methods for Turkish that alleviate the morphological differences and explains the generation of surface word forms as post-processing. In Section 5, we describe our experimental setup, including the data sizes and the morphology abstraction/separation experiments with the Marian (Junczys-Dowmunt et al., 2018) NMT tool. Finally, we conclude in Section 6.

Arabic

Arabic is a member of the Central Semitic language family. It is spoken by approximately 300 million people (ranked as the sixth language) and accepted as an official language in 27 countries (ranked as the third language after English and French). Arabic can be classified into three categories: Classical Arabic (the language of the Qur'an), Modern Standard Arabic (used in written texts and formal speeches, not a native language), and Arabic dialects (spoken by locals, mostly not written). Arabic is written from right to left with 28 distinct letters formed by various combinations of shapes with dots above or below them. There are no capital letters. Roots are mostly composed of consonants and can take on different meanings with the help of vowels and diacritics. Arabic has a very complex and sometimes inconsistent orthography. Arabic has a highly complex concatenative derivational and inflectional morphology. Words can take prefixes and suffixes at the same time for tense, number, person, and gender information. As an example of the concatenation processes, the Arabic word glossed as "and he will finish" can be decomposed into morphemes glossed as "finished", "he will", and "and".

Turkish

Turkish is a member of the Ural-Altay language family and is the most commonly spoken Turkic language, with more than 90 million speakers. It is the official language of Turkey and Northern Cyprus, and there are many Turkish-speaking minority communities all over the world, mainly in Europe (approximately 5M speakers). From the machine translation point of view, Turkish has interesting and challenging properties when compared to the most studied languages in data-driven MT research, such as English, German, French, and Spanish. First of all, Turkish is a highly agglutinative language where words are formed by concatenating morphemes (by suffixation) with very productive inflectional and derivational processes. Turkish morpheme surface realizations are generated by several morphophonemic processes such as vowel harmony, consonant assimilation, and elisions. The morphotactics of word forms can be quite complex when multiple derivations are involved. Indeed, Turkish is one of the languages that needs special attention because of its morphological richness. An example of Turkish morphology is the word partisindeydi (gloss: s/he was at his/her party), which can be decomposed into four morphemes: parti (party), +si (her/his), +nde (in), and +ydi (s/he was).

Obtaining Data

The backbone of a machine translation system is "good" data, as in most machine learning problems. In the case of MT, a parallel corpus is required. The domain, quality, and quantity of the data directly affect the translation output.
On the other hand, obtaining such data for machine translation purposes is not that easy. Efforts have been made to obtain parallel texts for machine translation by crawling the web for parallel data (Uszkoreit et al., 2010) and by using Mechanical Turk (Ambati and Vogel, 2010; Zbib et al., 2012). Even though we spent some effort trying to use MTurk, it is not yet available for requesters outside the USA. We used three different ways to obtain the Arabic-Turkish parallel corpora: i) translating Arabic texts into Turkish by professional translators, ii) exploiting the web for open-source Arabic-Turkish parallel texts, and iii) back-translating monolingual Arabic data by using existing machine translation systems.

Obtaining In-domain Training Data

We selected approximately 170K Arabic sentences in the news domain from LDC datasets and had them translated to Turkish by professional translators in order to obtain gold-standard training data. Even though the translators are experts, quality assurance is an important issue, and we aimed to avoid low-quality translations through a few steps. Before the translation process, we labeled each sentence to keep the parallelism in the translations. This labeling was done to prevent translators from joining two sentences or splitting one sentence into pieces while translating. Then, we asked each translator to translate 50 sentences. We analyzed the outputs, detected common translation errors, and prepared a translation procedure for machine translation purposes. The translation procedure had rules such as:

• All information in the source sentence should be translated into Turkish. Neither addition nor deletion of a part of a sentence is allowed.

• Translations should not have any meaning distortions or fluency problems. Constituents can be rearranged according to grammar rules without changing the meaning. Phrases should be chosen as precisely as possible.

• Each sentence should be translated independently, without considering the previous context.

• Sentences on two different lines should not be combined into a single sentence, or vice versa.

After the translation was completed, we employed a bilingual consultant to randomly select 5% of the sentence pairs from each document and score them according to the quality of the translation. If the quality was lower than a given threshold, the translators re-translated the problematic document once more. After this process, if the quality was still low, we rejected the translations for this document. We separated 1,600 sentences for development and 1,357 sentences for testing, and we requested four Turkish references translated by four different translators. Table 1 shows the time and cost spent to generate the gold-standard translations for training and development. As seen in the table, generating a parallel corpus by human translation between Arabic and Turkish is a time- and money-consuming task, as the number of such translators is limited. (As the Arabic part of the corpus is licensed by LDC, the generated corpus cannot be shared with any third parties.) Moreover, after spending a huge budget and a lot of time, the size of the corpus is still not sufficient to train an NMT system. These facts forced us to search the web for publicly available data.

Searching the Web for Publicly Available Data

We exploited the web in order to take advantage of already existing parallel Arabic-Turkish data. We obtained two subsets of parallel data with little effort, but both were out-of-domain.
The corpora are:

WIT: The Web Inventory of Transcribed and Translated Talks (Cettolo et al., 2012) contains transcriptions of TED talks in more than a hundred languages. We selected the IWSLT 2014 training data as it contains both Arabic-English and Turkish-English language pairs. First, common talk titles are searched, and then, on these common talks, Arabic and Turkish sentences that have the same English translation are matched. As a result, 130K Arabic-Turkish parallel sentences are obtained.

OpenSubtitles2018: OpenSubtitles2018 (Lison and Tiedemann, 2016) is a large database of TV and movie subtitles for sixty languages. The database has Arabic-Turkish parallel texts containing almost 28M sentences. Even though these subtitles are aligned based on time stamps, the word order differences between the languages make one-to-one sentence alignment harder. To solve this problem and obtain more reliable parallel data, the text was re-aligned by a bilingual sentence aligner (Moore, 2002). Using this method, 21M out of the 28M sentences were selected.

Both WIT and OpenSubtitles2018 are out-of-domain (OOD) for the news-domain MT task, and the ratio of this OOD corpora to the news domain is huge (20M to 130K). To increase the size of the news corpora, we used a well-known technique, backtranslation.

Monolingual Turkish Data and Backtranslation

In recently published NMT systems, backtranslation (Sennrich et al., 2016a) is commonly applied to increase the parallel corpora if the training data is limited. For backtranslation, two freely available monolingual Turkish news corpora, CNN-Turk (2.14M sentences) and Aljazeera (718K sentences), are used. The collected monolingual Turkish corpora are preprocessed to place each sentence on a separate line, to remove sentences consisting only of foreign words, symbols, or numbers as well as blank lines, and to replace carriage returns with line feed characters. Lastly, the corpus is sorted and duplicate sentences are removed. After backtranslation, as automatic systems cannot produce gold-standard translations for all sentences, we need to filter the translated output to obtain a "better" subset of it. We remove translations if i) the output has only one word, ii) the ratio of input/output words is more than three, or iii) any word except Turkish stop-words repeats more than three times. After all the collection efforts, the size and the domain of the parallel corpora are shown in Table 2.
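A minimal sketch of the back-translation filtering heuristics described above, assuming whitespace tokenization and a symmetric reading of the length-ratio rule; the illustrative stop-word subset and all identifiers are our own and are not taken from the paper.

    from collections import Counter

    TURKISH_STOPWORDS = {"ve", "bir", "bu", "de", "da", "ile"}  # illustrative subset only

    def keep_backtranslation(source, output, stopwords=TURKISH_STOPWORDS):
        # Return True if the back-translated sentence pair passes filters (i)-(iii).
        src, out = source.split(), output.split()
        if len(out) <= 1 or not src:                      # rule (i)
            return False
        ratio = len(src) / len(out)
        if max(ratio, 1.0 / ratio) > 3.0:                 # rule (ii)
            return False
        counts = Counter(t.lower() for t in out if t.lower() not in stopwords)
        if counts and max(counts.values()) > 3:           # rule (iii)
            return False
        return True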
(Luong et al., 2010) proposed a hybrid morpheme-word representation in the translation models of morphologically-rich languages. The first effort for Turkish morphological segmentation, (Durgar El-Kahlout and Oflazer, 2010), used morphological analysis to separate some Turkish inflectional morphemes that have counterparts on the English side in English-to-Turkish statistical machine translation. (Bisazza and Federico, 2009) present a series of segmentation schemes to explore the optimal segmentation for statistical machine translation of Turkish. (Mermer and Akin, 2010) worked on unsupervised morphological segmentation from parallel data for the task of statistical machine translation. With the rise of neural machine translation, fitting the whole corpora into a fixed number vocabulary has become a challenge. Despite its success over the previous SMT methods, NMT has the lack of using large vocabularies as the training/decoding complexity is directly proportional to the vocabulary size. One solution is to limit the vocabulary size to a fixed number but this is a challenging problem especially for morphologically rich languages. A well-known and effective method to solve this problem is the Byte-pair encoding (Sennrich et al., 2016b) (BPE) which splits words into "reasonable" number of subwords to satisfy the fix vocabulary criteria. BPE is an unsupervised word segmentation method originally used as a word compression algorithm. It iteratively "merges" the most frequent character n-grams into subwords leaving no out-of-vocabulary words. BPE is totally statistical, likelihood-based word splitting method and involves no means of linguistic information. So, researchers exploit morphology once more to incorporate "linguistically" separated subword representation when translating from/to morphologically rich languages (Sánchez-Cartagena and Toral, 2016; Bradbury and Socher, 2016) with neural machine translation. Recently, (Ataman et al., 2017) incorporate both supervised and unsupervised morphological segmentation methods for Turkish sub-word generation for Turkish-to-English NMT. They used morphological features for the suffixes in order to decrease the sparseness caused by suffix allomorphy. Morphological Abstraction of Turkish The productive morphology of Turkish potentially implies a very large vocabulary size: noun roots have about 100 inflected forms and verbs have much more. These numbers are much higher when derivations are allowed. For example, one can generate thousands of words from a single root even when at most two derivations are allowed. Turkish employs about 30,000 root words (about 10,000 of which are highly frequent) and about 150 distinct suffixes. As an example to the morphological variation, in our Turkish corpora, the root word inisiyatif (literally: initiative) occurs totally 258 times in 47 different forms where 25 of these forms are singletons. Using morphologically segmented subwords is straightforward and sufficient when Turkish is on the source side of the translation. In case of Turkish is on the target side, any process such as segmentation or abstraction must be done more carefully as in the final representation the surface word should be generated. As a result, the "best" representation have to be selected that covers the whole information for Turkish words to generate the correct surface form. In this work, we present an abstraction method similar to our previous work (Durgar El-Kahlout and Oflazer, 2010). 
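As a point of reference for the discussion above, the core of BPE can be sketched in a few lines following the merge-learning loop of Sennrich et al. (2016b). The toy vocabulary, the end-of-word marker and the number of merge operations below are illustrative choices only, not the settings used in this work.

```python
import re
from collections import Counter

def pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def apply_merge(pair, vocab):
    """Replace every whitespace-delimited occurrence of the pair by its merge."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Words are space-separated symbol sequences with an end-of-word marker.
vocab = {"y a z ı y o r </w>": 5, "y a z d ı </w>": 6, "g e l i y o r </w>": 3}
for _ in range(10):                      # number of merge operations (toy value)
    counts = pair_counts(vocab)
    if not counts:
        break
    best = max(counts, key=counts.get)   # most frequent adjacent symbol pair
    vocab = apply_merge(best, vocab)
print(vocab)                             # merged subword representation of each word
```

In practice the learned merge operations are then applied to the training and test corpora; the abstraction described next changes only the symbols this procedure sees, not the procedure itself.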
Our abstraction can easily regenerate the surface form after translation, which allows us to use this method even when Turkish is on the target side. We simply abstracted all possible letters in the morpheme suffixes to alleviate the differences due to morphophonemic processes such as vowel harmony, consonant assimilation, and elisions. First we apply morphological analysis and detect the root and the morpheme of the word, and then on the morpheme we replace i) the vowels a and e with capital A (vowel harmony); ii) i, ı, u and ü with capital H; iii) ğ and k with K (consonant assimilation); and iv) t and d with D (consonant assimilation). In order to combine the statistics and reduce the data sparseness problem, abstraction is a better choice for morpheme representation, as most surface distinctions are manifestations of word-internal phenomena such as vowel harmony and morphotactics. When surface morphemes are considered by themselves as the units in BPE, statistics are fragmented. Table 3 shows examples of Turkish words in surface form, the abstracted word and the English gloss, with the common parts highlighted. As seen in the table, the first and the second columns share three morphemes +mAK+DA+DHr (Write Features) but differ in surface form because of the morphophonemic processes. After the abstraction, the morphemes are the same, as in the English case. On top of abstraction, we also kept root +morphemes separated versions of both the surface and the abstracted Turkish words and experimented with each scenario to understand the effect of abstraction and separation (Table 4, number (5)). In each case we also employed BPE for vocabulary fitting. Table 4 shows a Turkish sentence in surface form, with abstraction and with separation, and also with BPE applied to each version. The root word inisiyatif (literally: initiative) is separated by BPE into two or three segments depending on the length of the morphemes in the surface and abstracted representations. In representation (4), we observe that BPE tends to keep the first (root) segment longer than in the surface case because of the abstracted morphemes. By applying separation over the surface or abstracted form, the effect of BPE is lost and only the unknown/singleton words are segmented by the algorithm, as for the word IGAD in representation (6).

Word Generation

As stated above, abstraction and/or segmentation processes on the target side always require much more attention than on the source side. Generating the correct surface form is crucial for the end user, who should not need to be aware of the inner representations. In order to generate the correct surface form, we employed an in-house morphological generation tool which transforms text given as root words and abstracted morphemes into the correct single-word forms. As a first step, this generation tool is trained on a large Turkish corpus and works by simply creating a reverse-map through morphological segmentation of the corpus. This map contains root+morpheme sequences as keys and their corresponding surface word forms as values.

(6) 4+BPE: Bu ortak inisiyatif kapsam +HnDA Sudan sorun +HnA kapsam +lH bir çözüm yer al +Hyor , I@@ G@@ AD inisiyatif +HnDA ise yalnızca güney +lA sınır +lH .
English: Within this joint initiative, there is a comprehensive solution to the Sudanese problem, while in the IGAD initiative it is limited to the south While creating this map, disambiguation step of morphological segmentation is omitted to increase the coverage, as keeping multiple resolutions for a surface word form will increase the number of keys for the reverse-map. Then the reverse-map is sorted by the number of occurrences of segmentation in order to select the most common ones. In our experiments, the reverse-map succeeds to recover the 92% of the abstracted words into surface forms successfully. For the rest of the words, we defined 23 hand-written rules to generate the words which works with 97% success. Defining the generation rules are not straightforward. For example the morphemes attached to the proper foreign words can be different depending on how the words are pronounced in Turkish. Machine Translation Setup All available data shown in Table 2 was tokenized, truecased (for Turkish) and the maximum sentence length were fixed to 90 for the translation model. As different segmentations of Arabic is out of our scope in this paper, we segmented Arabic prefixes and suffixes from with MADAMIRA (Pasha et al., 2014) with ATB parameter. To produce the abstracted Turkish words, the first step is the segmentation of morphemes and then an accurate disambiguation of the morphemes within the sentence. Thus, we first pass each word through a morphological analyzer (Oflazer, 1994). The output of the analyzer contains the morphological features encoded for all possible analyses and interpretations of the word. Then we perform morphological disambiguation using morphological features (Sak et al., 2007). Once the contextually-salient morphological interpretation is selected, we process the abstraction algorithm. On top of the abstraction and segmentation processes, we also trained BPE models over the training sets, for each language disjointly. For the neural machine translation experiments reported in this paper, comparatively new and better performing NMT architecture, Transformer (Vaswani et al., 2017) is used by Marian (Junczys-Dowmunt et al., 2018) toolkit. System is trained on a workstation housing 4 NVIDIA titan GPUs. The GPU memory parameters are set as follows; mini-batch-fit is checked, workspace reserved to 8000, and maxi-batch to 900. With this setup, 24k words/s training speed using all the GPUs in parallel is achieved. Transformer -type is employed for training. Depth of the network is set to 4, learning rate is set to 0.0001 with no warmup, and vocabulary size is set to 40k. Mini-batch-fit option is enabled. Usually it took 4-5 days to converge for the experiments. Our early stopping criteria is 20 runs without a BLEU (Papineni et al., 2002) increase. Moreover, we use Marian-decoder's beam search decoding with size 16. We ensemble two different models which resulted in the highest two BLEU scores on the development set during validation runs. We then merge the subwords back together in the hypothesis as described in 4.3. Results First group of experiments are performed to evaluate the effect of the data collected from different sources. As seen in Table 5, our baseline experiment is trained on the union of in-domain human translated corpora (BASE) and out-of-domain corpora WIT (OOD1) with a ratio 1:1. We did not perform with only BASE corpora as it is quite small to make sense for NMT training. 
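Before turning to the remaining results, the two target-side steps just described, suffix abstraction and reverse-map surface generation, can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the root/suffix split is assumed to come from the morphological analyzer and disambiguator cited above, the corpus iterable and fallback rules are placeholders, and all function names are ours.

```python
from collections import Counter, defaultdict

# i) a, e -> A and ii) i, ı, u, ü -> H (vowel harmony); iii) ğ, k -> K and
# iv) t, d -> D (consonant assimilation), applied to the suffix part only.
ABSTRACTION = str.maketrans({"a": "A", "e": "A",
                             "i": "H", "ı": "H", "u": "H", "ü": "H",
                             "ğ": "K", "k": "K",
                             "t": "D", "d": "D"})

def abstract(root: str, suffixes: str, separate: bool = False) -> str:
    """Abstract the suffix string; optionally keep root and suffixes separated."""
    abstracted = suffixes.translate(ABSTRACTION)
    return f"{root} +{abstracted}" if separate else root + abstracted

def build_reverse_map(corpus_words, analyze):
    """Map 'root+abstracted-morphemes' keys to their most frequent surface form.
    analyze(word) should yield all analyses (no disambiguation, for coverage)."""
    counts = defaultdict(Counter)
    for surface in corpus_words:
        for key in analyze(surface):
            counts[key][surface] += 1
    return {key: forms.most_common(1)[0][0] for key, forms in counts.items()}

def generate_surface(key, reverse_map, fallback_rules=()):
    """Recover the surface word; fall back to hand-written rules when unseen."""
    if key in reverse_map:
        return reverse_map[key]
    for rule in fallback_rules:
        surface = rule(key)
        if surface is not None:
            return surface
    return key  # last resort: emit the abstracted form unchanged

print(abstract("kahrol", "maktadır"))                 # kahrolmAKDADHr
print(abstract("kahrol", "maktadır", separate=True))  # kahrol +mAKDADHr
```

In use, generate_surface would be applied to each root +morphemes token of the NMT output after the subwords are merged back, with the hand-written rules catching the keys the reverse-map has never seen.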
On top of this experiment, we augmented the corpora with approximately 2M backtranslated sentences (MONO), giving a ratio of almost 1:7. Even though this ratio is above the suggested one (Sennrich et al., 2016a), we observed an improvement of +6 BLEU points. We argue that if the backtranslated data is preprocessed to satisfy some quality criteria, as described in Section 3.3, one can extend the training corpora with much more backtranslated data. As a last experiment, we added the Subtitles18 data (OOD2), 21M sentences, to experiment (2), giving a ratio of 1:10. As a result, despite adding a huge out-of-domain corpus, we again obtained an improvement of more than +2 BLEU points. The improvement in BLEU scores seems lower than predicted when compared to the size of the data, but we should be aware that the OOD2 corpus shares only a very limited part with the news domain.

Table 5: Arabic-to-Turkish MT BLEU scores for the different training corpora.

For the second group of experiments, we investigate the effect of abstraction and segmentation of Turkish. On top of experiment (3), we applied three different segmentation/abstraction representations. In the first representation (exp. 4), we separated root words and morphemes into two parts (e.g. kahrolmaktadır as kahrol +maktadır), in the second representation (exp. 5), we only employed abstraction (e.g. kahrolmaktadır as kahrolmAKDADHr), and in the third representation (exp. 6), we applied both segmentation and abstraction together (e.g. kahrolmaktadır as kahrol +mAKDADHr). It is noticeable that both the segmentation and the abstraction processes help to improve the translation. The improvement caused by segmentation is expected, as supported by previous research. The results achieved in this work show that our novel abstraction representation is a better alternative than segmentation to help BPE for Turkish. We observe almost no improvement with segmentation (some small positive change on the development data) but an improvement of +0.8 BLEU with abstraction, even with the huge training data of 24M sentences. Similarly, combining both segmentation and abstraction in one representation does not help the system as much as abstraction alone does. As this work is the first attempt at Arabic-to-Turkish MT to the best of our knowledge, in order to compare our systems we also translated the test data with Google 7 and Yandex 8 and listed the scores in the last two rows. The unique word counts (vocabulary) after each representation are shown in Table 6. It is noticeable that just separating root words and morphemes drops the vocabulary by more than half, but as the final vocabulary is fitted to 40K, this reduction does not have a significant impact on the translation. The small count increases in the abstracted representations come from the different morphological disambiguations of the same word. In the following example, we show both our translation and the Google translation of an Arabic sentence. Even though both translations are almost perfect, there is an important difference in handling the correct tense selection (present vs. past tense). Our translation selects a more suitable tense than the Google translation and is also closer to the reference.

Table 6: Type and size of the corpora used in the experiments.

Conclusion

This paper focused on a machine translation system for a new low-resourced language pair, Arabic-Turkish, in the news domain, which is the first effort for this language pair to the best of our knowledge. We obtained gold-standard in-domain data from human translators.
As this method is both time-consuming and expensive, we exploited publicly available corpora such as TED talks and subtitle translations. Later, we backtranslated monolingual Turkish news corpora. Finally, we performed experiments with all of these corpora and reported a +8 BLEU increase over the baseline setup with the state-of-the-art neural machine translation toolkit Marian. On top of these experiments, we also incorporated language-specific processing, namely the abstraction of morphophonemic processes caused by vowel harmony and consonant assimilation. We showed an improvement of +0.8 BLEU points with our abstraction representation. We also ran a morphological generation tool after the translation process, which recovers 98% of words correctly. Our future work includes applying the same abstraction algorithm to Turkish when translating from/to other European languages.
v3-fos-license
2019-03-18T13:58:24.520Z
2018-01-01T00:00:00.000
81163295
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.oatext.com/pdf/IOD-4-200.pdf", "pdf_hash": "a78c33403ca9e3b5e022f2bbd4c4201fc11a7d9c", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:59", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "sha1": "45b34271691d992aafe18fb794866d3135e97d9e", "year": 2018 }
pes2o/s2orc
Diseases and health risks associated with obesity

Epidemiology

Obesity is an evolutive, chronic and recurring medical condition that consists of a pathological accumulation of adipose tissue, in absolute values and percentages, in relation to the lean mass, to such an extent as to negatively affect the state of health. It is a real metabolic disease, which compromises the regulation of appetite and energy metabolism. Obesity is the leading cause of preventable death worldwide, with increasing prevalence in adults and children, and is considered to be one of the most serious public health problems of the 21st century. According to estimates published recently [1] more than 1.5 billion people are overweight or obese and of these 200 million men and 300 million women are obese. So much so that the World Health Organization (WHO) introduced in 2001 the expression globesity, crasis between the English words of obesity and globality, to indicate the global epidemic of obesity on a planetary level. In 2010 more than 5 million children under the age of 5 were already overweight or obese and over 60% of the world's population lives in countries where overweight and obesity kill more by now than malnutrition [1]. In fact, it is noted that in the richer countries, for the first time, average life expectancy begins to decline, perhaps above all due to the effect of obesity.
Currently, in half of the 34 member countries of the OECD (Organization for Economic Co-operation and Development) the total number of overweight and obese people exceeds the number of normalweighted or underweight people. In fact, therefore, in many countries the concept of normopeso does not correspond more to normality in a purely statistical sense [2]. The highest statistical tips are found in the Federated States of Micronesia and in one of them, Kosrae, on a population of just over 8000 inhabitants, 88% of adults are overweight, 59% are obese and 24% are severely obese (BMI> 35). In Louisiana and Mississippi, US states, one in three people is obese, respectively 33.4 and 34.9%). Regarding the WHO European Region, in 2008, over 50% of the adult population were overweight and about 23% of women and 20% of men were obese [3]. Clinical Events and Comorbility Overweight and obesity are the cause of physical disability, reduced working capacity and predispose the onset of numerous chronic diseases. Obesity increases the risk of many physical and mental illnesses. It is mainly associated with metabolic syndromes, combinations of medical disorders that include type 2 diabetes mellitus, hypertension, hypercholesterolemia and hypertriglyceridaemia. In general, the health consequences of obesity fall into two broad categories: the diseases attributable to the effects of increased body fat (such as osteoarthritis, obstructive sleep apnea, social stigmatization) and those due to the increase in the number of fat cells (diabetes, dyslipidemia, cancer, cardiovascular disease, non-alcoholic fatty liver disease) [4]. The correlation between obesity and specific conditions varies; one of the highest concerns type 2 diabetes: the excess of adiposity is in fact at the base of 64% of cases of diabetes in men and 77% of cases in women [5]. The increase in fat mass alters the response to insulin, favoring insulin resistance; the increase in adipose tissue also generates a proinflammatory state [6,7] and a prothrombotic [8,9]. Finally it should be noted that the presence of obesity increases the risk of development of some cancers such as endometrial cancer, colon-rectal cancer, gallbladder and breast [10]. The risk of developing diseases increases with increasing body mass index. Quality of Life of Obese Patients Obesity is frequently associated with a worse quality of life. The diseases associated with obesity mentioned above can contribute to reducing the quality of life since they reduce the physical abilities of patients (let's think for example of cardiological, respiratory or osteoarticular problems) and involve chronic pharmacological therapies that can in turn be associated with side effects. Some disorders such as depression, infertility or impotence can also further reduce the quality of life of patients in relation to interference with interpersonal relationships. Finally, within social life there is a wide range of repercussions such as discrimination in the workplace, difficulties in everyday situations such as the search for clothing, the way to travel to daily activities such as washing or climbing stairs. Mortality Data If we consider the diseases and health risks associated with obesity, it is clear that obesity can be an important factor in mortality. 
It is believed that every year in Europe 320,000 people die from causes related directly to obesity; the mortality correlated to excess weight therefore represents a serious public health problem where about 7.7% of all causes of death are due to excess weight. Life expectancy in the severely obese population is reduced, with a shortening of life expectancy of about 7-10 years, with a risk of death that increases with the increase in body mass index and abdominal circumference [11]. Assuming that the association between high body mass index and mortality is largely causal, the proportion of premature deaths that could be avoided by changing the lifestyle and reducing weight would be about one in seven in Europe and one in 5 in North America. Furthermore, the excess risk for premature death, ie before 70 years of age, among those who are overweight or obese is about three times greater in men than in women. This is consistent with the fact that obese men have higher insulin resistance and a higher risk of diabetes than women [12]. Diagnosis Obviously the discrepancy between excess and energy expenditure remains the basis for attempting to explain the onset of excess weight but can not fully explain the mechanisms underlying weight gain. That obesity has a hereditary component is a known fact, but it is only a few decades ago that genetic research has begun to define the role of biological inheritance with respect to environmental influences in the regulation of body weight. For the common obesity the genetic influences seem to express themselves through susceptibility genes that significantly increase the risk of developing a condition of marked overweight but they are not all indispensable for its expression and alone are certainly not sufficient to determine it. Apart from the rare monogenic obesity and obesity related to other genetic syndromes, such as Prader-Willi syndrome or Albright or Bardet-Biedl syndrome, the genes involved are certainly very numerous. From studies on human obesity and especially through animal models, hundreds of candidate genes have been identified, but at present it is not yet clear how genetic variables can interact with individual responses to nutrition and disorders related to different feeding modalities. Obesity, like most chronic degenerative diseases, recognizes a multifactorial genesis, where inheritance and the environment are associated in an intricate manner, albeit in ways that are still partially unknown. Obesity appears at the moment as a genetically complex condition that tends to be associated with a multitude of complications, such as insulin resistance, type 2 diabetes, dyslipidemia, arterial hypertension, etc. These, in turn, are certainly influenced by lifestyle and eating habits but are also linked to genetic predispositions, often independent of one another and very difficult to disentangle and identify [13]. It would therefore be advisable to use the term in the plural, obesity; a term that better understands the different phenotypic expressions collected, at the moment, in the generic definition of obesity. The measurement of the distribution of body fat can be performed by different methods: from the measurement of skin folds, to the measurement of the waist circumference or its relationship with the circumference of the hips, or with more sophisticated techniques such as ultrasound, axial tomography or magnetic resonance. 
More simply, but still in a statistically effective manner, the classification of the population based on weight is made using the body mass index (BMI = body mass index, according to the Anglo-Saxon expression), which is considered the most representative of the presence excess body fat. The BMI is calculated according to the following formula: BMI = weight (in kg) / square of height (in meters). The weight classes for adults indicated by the BMI are: <18.5 underweight; 18.5-24.9 normal weight; 25-29.9 overweight; >30 obesity. Naturally there are differences related to sex; with the same BMI, women tend to have more body fat than men, as well as older people compared to young people. Moreover, those who have a physical body can weigh more precisely thanks to the highly developed muscle mass, but do not fall for this in the overweight or obesity category. For people under 19 years, the growth curves of the WHO and the threshold values recommended by the International Obesity Task Force (IOTF) that take into account age and gender are used for weight classification. Prevention and Screening Living nowadays, in an obeseogenic environment, the treatment of obesity becomes complex and easily recurrent. It is therefore necessary to invest in prevention tools that reduce the impact of the environment on individuals. Awareness-raising campaigns on the general population, above all aimed at young people to adopt healthy lifestyles, thanks to a correct diet and adequate physical activity, can allow you to control your weight and avoid exceeding the levels of risk. Selective psychoeducational intervention programs aimed at groups of people presenting risk factors significantly higher than average and above all interventions aimed at individuals at high risk, carriers of biological markers or premorbid symptoms appear even more useful. Particularly effective are programs dedicated to subgroups of children or adolescents at greater risk to which primary and secondary prevention interventions of a selective type and indicated to their age group are applied, which allow to save resources and increase effectiveness with respect to interventions. of a universalistic type. Useful and reachable objectives through primary prevention could be characterized by ensuring easy access to consultation and care services, counseling to families of subjects at risk, dissemination of basic knowledge on health promotion and protection, improving the communication flow between the institutions responsible for prevention (schools, local health agencies, general practitioners, pediatricians of free choice, press and information agencies, voluntary and consumer associations, etc.) and users [13]. Treatment The high clinical complexity and the comorbidities connected to obesity necessitate an approach, as for other eating disorders, multidimensional and interdisciplinary that involves a wide range of skills: nutritional, internal, psychiatric and surgical and therefore of different professional figures, Last but not least the general practitioners who know better the patients and the family and social environments in which they live and interact, all involved in interconnected teams and oriented to a punctual management of clinical outcomes. The task of obesity therapy is not only linked to weight loss but also to the prevention or treatment of related diseases. 
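As a small worked example of the BMI formula and the adult weight classes given in the Diagnosis section above, a sketch might look like the following; the sample weight and height are invented, and the cut-offs apply to adults only, not to the under-19 classification based on WHO growth curves and IOTF thresholds.

```python
# BMI = weight (kg) / height (m) squared; adult classes per the text above.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def classify(bmi_value: float) -> str:
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    return "obesity"

value = bmi(95, 1.75)                      # invented example person
print(round(value, 1), classify(value))    # 31.0 obesity
```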
Naturally, weight loss is a substantial condition but must be assessed not only on the extent of weight loss but also on clinical and psychological parameters and on the quality of life of the obese patient. Weight reduction, although modest, is a useful tool and has more value the longer it remains stable over time. The treatment of obesity is based on a stable change over time of the alimentary behaviors and of the psychic processes connected to them. To date, nutritional intervention remains the foundation in the treatment of obesity; the reduction of the caloric intake allows, at least in the short term, to reduce the adipose mass but already relatively quickly induces a reduction of the metabolism, through energy-metabolic regulation mechanisms that tend to energy saving [14]. These defense mechanisms support, first a reduction of the decline, and then a substantial return to weight before the diet, the socalled weight cycling syndrome, which involves frustration, a sense of inability and devaluation. Therefore, it is essential to also promote an increase in energy expenditure, through a balanced and progressive increase in physical activity, compatible with the patient's clinical condition, so as to keep active the metabolic, cardio-respiratory and even psychic processes. In an obese patient already walking for 30-45 minutes a day can allow, together with a balanced diet, an acceptable weight loss around 10% of the initial weight in a few months. Since obesity is a chronic condition for these results to be maintained over time, it should be accompanied by support programs, such as integrated psychoeducational or psychotherapeutic interventions of a cognitive or problem-solving type, aimed at modifying the lifestyle in certain types of patients. Also the use of drugs, which can facilitate the reduction of caloric intake or energy expenditure, could be useful in obese patients with BMI above 30, even if at present there is still no ideal drug. Beyond all the various fanciful and sometimes dangerous combinations of drugs, it is possible to use some molecules, recognized as valid by the national regulatory body, to be combined with diet and exercise, which can help weight loss. At the moment only two molecules are available in Italy: the orlistat, which inhibits gastrointestinal lipases, reduces the absorption of fats and the liraglutide that is part of a new generation of analogous GLP-1 drugs, able to regulate the metabolism of the glucose and mimics the way the intestine communicates signals to the brain to regulate appetite, reducing food intake. Another opportunity in the treatment of large obesity, BMI> 40 or with BMI> 35 in the presence of associated co-morbidities, is offered by bariatric surgery that allows to determine a significant weight loss in the long term. At least two-thirds of the obese pathological subjects who have chosen bariatric surgery can lose about 50% of the excess weight over 10 years and beyond when they are motivated to do so and adhere to the therapy. Bariatric surgery has a particularly advantageous cost / benefit ratio (from the first year of treatment), and often allows a significant saving on socio-health costs compared to the conservative approach [15]. Surgical Interventions Interventions that limit the introduction of food a. mainly mechanical action (restrictive interventions): adjustable gastric banding, vertical gastroplasty and sleeve gastrectomy. b. 
Primarily functional action: gastric bypass and variants Interventions that limit energy absorption: biliopancreatic diversion and duodenal switch [16]. The best results are obtained if the pre-operative and post-operative intervention of the educational and psychological paths, individual or group, that support the patient for a sufficiently long period [17] is associated. A further therapeutic approach is the multidisciplinary rehabilitation of obesity, which is often reserved for high-grade obesity or with significant complications; this system is based on management by a group of more professionals (internists, nutritionists, psychiatrists, psychologists, dietitians, physiotherapists and nurses, etc.) in the context of multiple settings (outpatient clinic, residential rehabilitation, etc.). This approach tries to reconstruct correct eating habits in the patient, improve his ability to manage body weight, reactivate muscle structures and recover joint mobility, improve the cardiovascular and respiratory system, increase energy expenditure, increase the ratio of lean mass / fat mass and reduce body weight. Assistance Network Obesity must be considered a real disease that must be treated to live longer and with a better quality of life. Therefore, a significant and important cultural and clinical transition is necessary. It is therefore necessary to formalize and implement at national level a more advanced, correct and adequate criterion for the problems of overweight or obese people, structuring a specific Diagnostic Therapeutic Assistance Path (DTAP), as for other eating disorders, organized as a network multidisciplinary with professionalism ranging from the clinical psychologist to the bariatric surgeon, from the pediatrician to the nurse, from the general practitioner to the internist, all involved in teams integrated with regional hubs and various spokes at the local level in the various ASLs. In this way, all patients would be guaranteed the transition from "care episodes" to an approach based on "pathways of care and assistance". This DTAP model would favor not only a multidisciplinary care of the patient but also a whole series of virtuous processes, in a suitable environment, universalism in access, sustainability and economic rationality, risk reduction and finally continuity of care. It could also be very useful to involve the associations of patients and their family members who would also guarantee the voice and needs of the users, to avoid that many people turn to unqualified facilities or operators or even to the many selfstyled healers and sellers of false hopes [18][19][20]. Conclusions The concept of obesity has an ancient history: the expression obesity (obesitas) was used for the first time, almost 2000 years ago, by Aulo Cornelio Censo to indicate an excess of fat but the excess fat had already been described in Egyptian papyri around 1500 BC and from Hippocrates in the 4th century BC Naturally, they were problems linked to small sections of the population, the nobles and the rich, who had access to great food availability and did not need to work; something uncommon in times of war, famine and pestilence. The substantial point is related to the dyscrasia that has developed in recent decades in which increasingly complex societies have been built, in which access to food has become easier while the mechanization of work and transport has led to an ever-lower expenditure of power. 
Basically, the genetic / biological fund of man struggles not a little to adapt to modernity. Structured to save energy and survive the phases of nutritional deficiency, the human metabolic machine must row countercurrent in an era in which the energy excess compared to consumption is practically the norm. The evolution of millennia has been displaced by the new standards of life. But in spite of technological and cultural advances in eating, modern phylogenetic behavioral residues are still resisting in modern humans, such as eating compulsively large quantities of food in the shortest possible time. It is a primordial mechanism, developed over time, to allow the body to take energy, when available, in greater quantities than necessary to balance the balance of the moment, signaled by the feeling of satiety. The goal was to allow to store reserves for moments of energy shortage. Operation once providential but currently not very suitable, rather unsuitable compared to the excessive energy availability available today for a large slice of the world population. Ancestral behavior is present in binge eating, mainly related to states of emotional stress and psychological disorders. The same low-calorie diets, so current nowadays, represent for the metabolic balance of the organism, a condition of distress such as to favor, in the medium-long period, an alteration of the control on natural hunger / satiety devices, until at the appearance of compulsive feeding phenomena (binge eating). Dieting, or dieting on a diet, is a major cause of eating disorders, particularly Binge Eating Disorder and obesity. Hence the well-known paradox: diets eventually make you fat. The main treatment against obesity consists in the association of diet and exercise. An accurate diet can cause weight loss in the short term; however maintenance is often difficult and requires constant effort in physical exercise as well as proper nutrition. The success rate of longterm weight maintenance with lifestyle change ranges from 2 to 20%. As for drugs, scientific research has always been working to identify new molecules to be associated with the diet. At the moment the doctor has numerous drugs to treat diseases associated with obesity but with regard to the treatment of obesity he has at his disposal only drugs able to act on the reduction of fat absorption that can find indications in selected cases. On the market there are also various herbal products, supplements or cocktails of various products that patients can easily find but which are often ineffective if not even in some cases highly detrimental to health.
v3-fos-license
2018-12-15T11:09:05.840Z
2017-07-04T00:00:00.000
56315702
{ "extfieldsofstudy": [ "Psychology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.banglajol.info/index.php/IMCJMS/article/download/33093/22328", "pdf_hash": "969b792007347476803cce7cecdcec0e338ec670", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:62", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "sha1": "969b792007347476803cce7cecdcec0e338ec670", "year": 2017 }
pes2o/s2orc
Health related quality of life in children with autism spectrum disorder in Bangladesh Background and objective: Autism spectrum disorder (ASD) is considered as an emerging problem in our socioeconomic context. The objectives of this study were to compare the health related quality of life of children with autism spectrum disorder to that of typically developing peers. Methods: A cross sectional comparative study was conducted on autistic children and normal children in six centers of Dhaka city to see the health related quality of life from parent’s perspective by using the Pediatric Quality of Life Inventory 4.0 Generic Core Scales (PedsQL scale). Total of 115 children within the age group of 8-12 years were selected, among them 57 were autistic and 58 were normal peers. Results: Children with autism spectrum disorder had poor physical (mean score 6.04), emotional (mean score 9.77) and social (mean score 14.51) functions as well as learning ability (mean score 8.12) whereas normal children’s functioning mean scores were 0.10, 1.79, 0.0 and 0.45 in respective domains and the differences were significant (p<.0001) in each aspect of quality of life. Conclusion: This study revealed that, children with autism spectrum disorder experienced poorer health-related quality of life than normal children and thus the findings would contribute in implementing different strategies for improving the health status of autistic children. IMC J Med Sci 2017; 11(2): 40-44 Introduction Autism spectrum disorder (ASD), sometimes referred to as "autism" is a chronic disorder whose symptoms include failure to develop normal social relations with other people, impaired development of communicative ability, lack of imaginative ability, and repetitive, stereotyped movements [1].ASD affected individuals have markedly different social and emotional behaviors than non-autistic individuals.ASD also has an effect on intelligent quotient (IQ).About 30% of individuals with autism have an average or gifted IQ, while 70% are considered mentally retarded [2]. Health related quality of life in children with ASD is poor than the normally developing peers.In developing countries like Bangladesh, autism is considered as a curse.People of the society, sometimes parents are also ignorant about their children's physical and mental condition.Moreover, institutions involved with the treatment and improvement of the health of the autistic children cannot properly deal with problems associated with autism.The last decade has witnessed a significant increase in the utilization of health related quality of life (HRQL) instruments in an effort to improve patient health outcomes and to determine the efficacy of healthcare services [3,4].HRQL explores the well-being of individuals with various medical conditions and disabilities.Therefore, by using this scale we can estimate the actual health condition of the autistic children (8-12 years) and compare that with the normal developing child.The findings would be helpful for the effective management of autistic children.Therefore, the preset study was undertaken to find out the HRQL and socio-demographic characteristics of children with autism. 
Study population and place: This comparative cross sectional study was conducted in three centers which deal with autistic children and comparison was done with normal children from three other centers.Bangladesh Protibondhi Foundation (BPF), Kalyani, Institute of Neurodevelopment and Research Centre and Society for the Welfare of the Intellectually Disabled, Bangladesh (SWID Bangladesh) and its sister wing -Ramna Protibondhi Shongstha was chosen for data collection from parent of autistic children. Total 115 children were selected for the study.Among them 57 were autistic child and 58 were normally developing peers.The age range of both autistic and normal children was 8-12 years.In this study, children of this age group was selected because 8-12 years children are more appropriate for assessing the questions used in Pediatric Quality of Life Inventory 4.0 Generic Core Scales (PedsQL).Children with ASD were eligible to participate in the study if (a) they have one of the three ASD diagnosis e.g.autism disorder, pervasive developmental disorders not otherwise specified or Asperger disorder, (b) they are not suffering from other complicated diseases and (c) the parents of autistic child willing to provide data. Research instrument: Data were collected from the parents (either mother or father) of the children because the autistic children could not provide the actual data that was needed and was collected by a semi-structured pre-tested interview questionnaire by considering all possible variables according to information, developed on the basis of relevant literature.The questionnaire contained sociodemographic characteristics of children and their parents along with the questions (modified) used in the PedsQL [5][6][7][8] to measure HRQL.The parents rated children's HRQL over the past month on a 5point scale ("never a problem" to "always a problem") scored from 0 to 4 with lower scores indicating better HRQL.The PedsQL comprises a physical health summary (physical health subscale-8 items) and a psychosocial health summary (emotional-5 items; social-5 items; and school-5 items functioning subscales).During calculation for each child's total health summery in 4 domains, 0-4 was considered as good and above 4 was considered as poor heath function. Procedure of data collection: Before getting started, permission for data collection was taken from every school.Data were collected from the parents at the school premises by face to face interview.The data collection for each participant required two or three visits within a 4-week period at a location of six study places.During the first visit, eligibility criteria were confirmed, and during the second visit, the PedsQL was administered.Parents or school authority were bound to provide a copy of the medical report documenting an ASD diagnosis.Healthy control children in the peer group met the same inclusion criteria except for a diagnosis of the autism spectrum.Ethical approval was obtained from institutional review board of American International University, Bangladesh (AIUB).Informed written consent was obtained from all participants and facilities involved in the study. 
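To make the scoring concrete, the sketch below sums the 0-4 item ratings per domain and applies the good/poor cut-off described above. Reading the domain score as a raw item sum is our interpretation of the text (the standard PedsQL 0-100 transformation is not applied here), and the example ratings are invented.

```python
# Item counts per PedsQL domain as described above; ratings are 0-4, lower = better.
DOMAIN_ITEMS = {"physical": 8, "emotional": 5, "social": 5, "school": 5}

def domain_score(item_ratings):
    return sum(item_ratings)

def domain_label(score):
    return "good" if score <= 4 else "poor"

child = {
    "physical":  [1, 0, 2, 1, 0, 1, 0, 1],   # invented example ratings
    "emotional": [2, 3, 1, 2, 2],
}
for domain, ratings in child.items():
    assert len(ratings) == DOMAIN_ITEMS[domain]
    score = domain_score(ratings)
    print(domain, score, domain_label(score))   # physical 6 poor / emotional 10 poor
```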
Result The study was carried out among 115 children, 57 of them were autistic (ASD) and rest were normal healthy children.The ASD group comprised of 44 boys and 13 girls, with a mean age of 9.67±1.42years.The comparative healthy peer group comprised of 43 boys and 15 girls, with a mean age of 9.66±1.40years.Participating families of both groups belonged to middle to higher socioeconomic status.In autistic group, 50.9% parents had graduate and postgraduate level of education and it was 75.9% in normal group.Educational status among the respondents of normal child was higher than the respondents of autistic child.Of the total 57 respondents of autistic child group, 61.4% were housewives, 28% service holder (both govt.and private) and rests were businessman (5.3%), unemployed, retired and agricultural worker (each 1.8%) whereas majority (51.8%) in control normal child group were service holder.Monthly family income of the all respondents ranged from Tk.10000 to Tk. 300,000 taka.Of the respondents of autism and normal healthy groups, 54.4% and 48.3% respectively had monthly family income of Tk. 25001 to Tk. 50000 taka.The socio-demographic characteristics of the parents of both groups were almost similar. Table-1 shows that the children with autism spectrum disorder had significantly higher mean scores for physical (mean 6.04), emotional (mean 9.77) and social functions (mean 14.51) as well as for learning ability (mean 8.12) compared to the normal children's mean scores which were 0.10, 1.79, 0.0 and 0.45 in respective domains.Higher mean value of all these variables for autistic children than that of normal children indicated that autistic children had very lower quality of life. Discussion The Center for Disease Control and Prevention states that the prevalence of autism is increasing at epidemic rates [9].For decades since first described by Leo Kanner in 1943, autism was believed to occur at a rate of 4-5 per 10,000 children [10].From surveys done between 1966 and 1998 in 12 countries (e.g., United States, United Kingdom, Denmark, Japan, Sweden, Ireland, Germany, Canada, France, Indonesia, Norway, and Iceland), the prevalence ranged from 0.7-21.1/10,000population, with a median value of 5.2/10,000 (or 1/1923) [11].The most recent results from the Centers for Disease Control and Prevention (CDC) suggest that, in the United States, the prevalence of ASD is 1/70 boys and 1/315 girls, yielding an overall rate of 1/110 [9].This is nearly identical to the overall prevalence from a recent British study [12].In our country it has been reported that out of every 94 boys, one is affected by autism.For girls, it is one in every 150.In Bangladesh, no systematic research has been carried regarding the magnitude or prevalence of autism but it is assumed that about 300,000 children are affected [13]. Previous studies related to HRQOL of autistic children tried to find out the agreement between children self and parent's proxy report as well as children's QOL and result revealed lower QOL of autistic group than normal peers.In this study, we have used the data from parent's perspective.This study set out to increase our knowledge of children with ASD's HRQL compared to typically developing peers from the parents' perspective. The study revealed significantly poorer HRQL for children with ASD than their peers for the physical, emotional, social functions as well as learning ability at school.Children with ASD had consistently poor performance in those parameters. 
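The report does not state which statistical test produced the p < .0001 comparisons; purely as an illustration, a between-group comparison of domain scores could be run as below, where the score arrays are invented placeholders and the choice of Welch's t-test (with a rank-based alternative) is our assumption rather than the authors' method.

```python
import numpy as np
from scipy import stats

# Invented placeholder social-domain scores, not study data.
asd_social     = np.array([15, 14, 16, 13, 15, 14])
control_social = np.array([0, 0, 1, 0, 0, 0])

t, p = stats.ttest_ind(asd_social, control_social, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.4g}")

# A rank-based alternative if normality is doubtful:
u, p_u = stats.mannwhitneyu(asd_social, control_social, alternative="two-sided")
print(f"U = {u:.1f}, p = {p_u:.4g}")
```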
In relation to HRQL parameters it appeared that the children with ASD had lower well-being, which should be addressed by service providers.This confirms previous findings in children and adolescents with ASD and high functioning autism (HFA), but uses a control group of typically developing children instead of normative data [5,7,[14][15][16][17]. ASD has been and continues to be a major health issue in our current society.Significant numbers of children are being diagnosed with ASD each year, and this includes young adults, indicating a need to increase the understanding and awareness of the general public.This study would help the policy makers and administrators to find out the actual condition of autistic children in comparison to normal one and thus would contribute in implementing different strategies for improving health status of autistic children. Table - 1 : The mean PedsQL score of autistic and normal healthy children
v3-fos-license
2022-10-02T15:11:44.303Z
2022-09-01T00:00:00.000
252662703
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1016/j.jseint.2022.09.008", "pdf_hash": "066c9b04ecb3c16bff3bd2733c3930a5b628e477", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:63", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "sha1": "d3f3aabd8a927be902f037db2c257e0550693c72", "year": 2022 }
pes2o/s2orc
Active range of motion of the shoulder: a cross-sectional study of 6635 subjects Background Normative data for passive range of motion are well established, but daily living is comprised of active motion. The purpose of this study was to establish normative values for active range of motion of the shoulder across age, sex, and arm. Our hypotheses were that active range of motion of the shoulder (1) decreases with age group, (2) differs between males and females, and (3) differs between the right arm and left arm. Methods Shoulder active range of motion was captured with an eight-camera markerless motion capture system. Data were collected for a heterogenous sample of 6635 males and females of all ages. For each subject, 6 shoulder motions were collected with maximum values measured: external rotation, internal rotation, flexion, extension, abduction, and horizontal abduction. Three-way repeated measures analyses were performed, with 2 between-subject factors (age group and sex) and 1 within-subject factor (arm). The unadjusted threshold for statistical significance was α = 0.05. Results External rotation decreased with age (approximately 10° decrease from below 30 years to above 60 years). External rotation was approximately 5° greater in the right arm, whereas internal rotation was approximately 5° greater in the left arm. Flexion decreased with age (approximately 15° decrease from below 20 years to above 60 years). For age groups from 10 to 59 years, extension and horizontal abduction were approximately 5° to 10° greater in females than males. Abduction was greater for females than males. Abduction was also greater in younger people (aged 10-29 years) than older people. Conclusion In general, active range of motion of the shoulder decreases with age. Sex (male/female) and arm side (right/left) also influence shoulder range of motion. The anatomy of the shoulder complex allows for a wide range of activities in daily living and athletic activity. Deficiencies in shoulder range of motion can predispose athletes and others to shoulder injury. 12,22,23 Conversely, return to full range of motion can be challenging after surgical or nonsurgical treatment of shoulder injury. 20,21,24 Thus, comparing a person's shoulder range of motion to normative values of healthy individuals is vital in the prevention and rehabilitation of shoulder injuries. Numerous studies have reported passive range of motion of the shoulder as defined by the American Academy of Orthopaedic Surgeons, 1 showing changes across age groups, 2,4,8,11,13 sex, 2,8,11 and side-to-side 2,6,14,[17][18][19][21][22][23] in athletes, patients, and the general population. Although passive range of motion is well established, daily living is comprised of active motion. Unfortunately, diagnosis of shoulder active range of motion has been limited due to the complexity of shoulder motion, the contribution of scapulothoracic motion to shoulder movement, and the use of manual instruments for measurement. 2,11 The advent of new technologies may allow us to establish normative values of shoulder active range of motion and improve our ability to screen and prevent shoulder injuries. One such technology approved by the Food and Drug Administration is the DARI markerless motion capture system, which has been used previously to assess lower extremity and full body movements in relation to injury risk and rehabilitation. 3,5,7,9 This system uses 8 cameras and software-based algorithms to measure active range of motion of each joint. 
The purpose of this study was to use markerless motion capture to establish normative values for active range of motion of the shoulder across age, sex, and arm. Our hypotheses were that (1) active range of motion would decrease with age, (2) there would be differences between males and females, and (3) there would be differences in range of motion between the right and left arm, as most individuals are right-handed.

Methods

Sterling Institutional Review Board determined this retrospective analysis of anonymous, previously collected clinical data to be exempt from institutional review board review. From 2018 to 2022, active range of motion data were collected for patients, athletes, and clients at 51 facilities throughout the United States via multicamera markerless motion capture technology (DARI Motion, Overland Park, KS, USA). At each facility, a motion capture system collected data with 8 cameras electronically synchronized to their mainframe computer. The markerless system used Captury Live (The Captury GmbH, Saarbrücken, Germany) tracking software, which implements methods previously described. 16 Both the human body and image domain were represented by Sums of spatial Gaussians. The skeletal motion was estimated by optimizing a continuous and differentiable model-to-image similarity measure. While in the work of Stoll et al, 16 the similarity measure was based on color similarity, the current approach combines the color similarity with a term computed by a Deep Convolutional Neural Network. The DARI markerless system has been used in several recent studies to measure active range of motion for a variety of activities. 3,5,7,9,10,15 One of these studies compared the markerless motion data to "gold standard" marker-based data and showed similar consistency of joint angle calculations for each system, although the magnitudes of angle measurements differed between the 2 systems. 10 For each subject, 6 shoulder motions were collected with maximum values measured: external rotation, internal rotation, flexion, extension, abduction, and horizontal abduction. Instructions for the 6 motions are shown in Table I. To eliminate a weighting bias from multiple entries from some subjects, only one capture for each participant was included in the analyses. Data from all facilities were deidentified before the researchers received them for use in the present study. Subjects were categorized by their sex (male or female) and by their age group (10-19, 20-29, 30-39, 40-49, 50-59, 60-69, or >69 years of age). For each shoulder parameter, a 3-way repeated measures analysis was performed, with 2 between-subjects factors (age group and sex) and 1 within-subject factor (arm). If there was a significant interaction effect, associated main effects were not reported, with subsequent analyses looking at simple main effects. All pairwise comparisons included a Bonferroni adjustment for multiple comparisons. For all analyses, the unadjusted threshold for statistical significance was set at α = 0.05. Partial eta-squared (ηp²) was reported as a measure of the effect size, with the standard minimum thresholds of 0.01, 0.06, and 0.14 used to determine small, medium, and large effect sizes, respectively.

Results

Data were captured for 6635 individuals at athletic team and performance centers (n = 3980 individuals), health care centers (n = 2142), wellness facilities (n = 327), and military bases (n = 186). The resulting number of subjects in each age-sex combination is provided in Table II. The mean values and confidence intervals for each shoulder motion are presented in Fig. 1 and Table A-1 (in Appendix A).
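Before the per-motion results, the two quantities used throughout the analysis, partial eta-squared and Bonferroni-adjusted pairwise comparisons, can be sketched as follows. The sums of squares and p-values are illustrative numbers rather than study data, and the ANOVA itself is not reproduced here.

```python
# Partial eta-squared: eta_p^2 = SS_effect / (SS_effect + SS_error), with the
# 0.01 / 0.06 / 0.14 thresholds quoted in the text; Bonferroni adjustment for a
# family of pairwise p-values at alpha = 0.05.
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    return ss_effect / (ss_effect + ss_error)

def effect_size_label(eta_p2: float) -> str:
    if eta_p2 < 0.01:
        return "trivial"
    if eta_p2 < 0.06:
        return "small"
    if eta_p2 < 0.14:
        return "medium"
    return "large"

def bonferroni(p_values, alpha=0.05):
    m = len(p_values)
    return [(p, min(1.0, p * m), p * m < alpha) for p in p_values]

print(effect_size_label(partial_eta_squared(12.0, 200.0)))  # ~0.057 -> small
print(bonferroni([0.001, 0.02, 0.04]))                      # only the first survives
```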
Data are shown for each sex (male and female) and each arm (right and left) for each age group. Significant differences between age, sex, and arm groups are shown in the tables in Appendix B.

External rotation
External rotation decreased significantly with age (P < .001), and the effect size was small, approaching medium (ηp² = 0.057). Among the 21 age group pairwise comparisons, 17 showed significant differences (Table B1-1). There was also a statistically significant but trivial interaction effect between sex and arm (P < .001, ηp² = 0.003). Pairwise comparisons revealed that males had greater external rotation than females for right arms, but there was no difference for left arms (Table B1-2). Both males and females had significantly greater external rotation in their right arm (Table B1-3), with a greater difference in males than in females (4.6° vs. 2.8°).

Internal rotation
There was statistically greater (P < .001) internal rotation in the left shoulder than the right shoulder (Table B2-1), and the effect size was small (ηp² = 0.027). There was also a statistically significant but trivial interaction effect between age and sex (P = .001, ηp² = 0.003). Among males, there were statistically significant pairwise comparisons between age groups, but with inconsistency regarding whether the older or younger group in these comparisons had the higher value (Table B2-2). In contrast, among females, no age group effect was found (Table B2-3). For the 3 youngest age groups, females had greater internal rotation than males; however, there was no such difference found in the other 4 age groups (Table B2-4).

Flexion
There was a statistically significant but trivial interaction effect between age and arm (P < .001, ηp² = 0.007). In both the right and left arms, there were numerous differences in pairwise comparisons between age groups (Tables B3-1 and B3-2), with the younger group producing greater flexion than the older group. For the 2 youngest age groups, the left arm had greater flexion than the right arm (Table B3-3). For the next 3 age groups, the right arm had greater flexion than the left arm. For the 2 oldest age groups, there was no difference found between the arms. Differences between males and females were not statistically significant (P = .13).

Extension
There was a statistically significant but trivial interaction effect between age and sex (P = .02, ηp² = 0.002). Among the 21 age group pairwise comparisons within males, only 3 revealed statistically significant differences (Table B4-1). Among females, no age group effect was found (Table B4-2). Although there was no statistically significant difference between sexes for the 2 oldest age groups, females had greater extension than males for the 5 younger age groups (Table B4-3).

Table II. Number of subjects in each age-sex combination (age groups 10-19, 20-29, 30-39, 40-49, 50-59, 60-69, and >69 years; last value is the row total):
Male: 1129, 1321, 748, 538, 342, 116, 33; total 4227
Female: 778, 772, 311, 250, 198, 77, 22; total 2408
Total: 1907, 2093, 1059, 788, 540, 193, 55; total 6635

Figure 1. Maximum shoulder angle vs. age. Blue lines correspond to males, whereas pink lines correspond to females. Solid lines correspond to right arms, whereas dashed lines correspond to left arms. Error bars denote 95% confidence intervals.

There was also a statistically significant but trivial interaction effect between age and arm (P < .001, ηp² = 0.004). For the right arm, no age group effect was found (Table B4-4). Among the 21 age group pairwise comparisons for the left arm, only 3 revealed statistically significant differences (Table B4-5).
Finally, although there was no significant difference between arms for the oldest age group, the right arm had greater extension for the other 6 age groups (Table B4-6).

Abduction
Overall, there was a trend in which abduction was greater for females than males and greater for the left arm than the right arm. Abduction was greatest in the 2 youngest groups. The age × sex × arm interaction effect was statistically significant but trivial (P = .01, ηp² = 0.003), with numerous statistical differences in pairwise comparisons (Tables B5-1 through B5-6).

Horizontal abduction
Overall, there was a trend in which horizontal abduction was greater for females than males. There was a statistically significant but trivial interaction effect between sex and age (P < .001, ηp² = 0.005). Among males, horizontal abduction did not change with age (Table B6-1). In contrast, among females, horizontal abduction was greatest in the 2 youngest age groups (Tables B6-2 and B6-3). There was also a statistically significant but trivial interaction effect between age and arm (P < .001, ηp² = 0.006). For both the right and left arms, horizontal abduction was greatest in the 2 youngest age groups (Tables B6-4 and B6-5). Finally, there was large inconsistency among age groups regarding which arm had the greater maximum horizontal abduction angle (Table B6-6).

Discussion
Our hypothesis that motion decreases with age was partially supported. External rotation and flexion decreased significantly with age. Abduction decreased in males from the teens into the 30s, whereas horizontal abduction decreased in females from the teens into the 30s. These results are consistent with previous studies showing a decrease in shoulder active range of motion with age. 2,11 Previous studies of passive range of motion reported decreased shoulder motion with age, 8,13 whereas age-related changes for other joints (knee, hip, and elbow) were not significant. 13

As hypothesized, there were several significant differences between males and females. Under age 40 years, external rotation of the right shoulder was greater for males than females; however, there were no significant differences in external rotation of the left shoulder. This contradicts previous studies showing greater external rotation in females. 2,11 Several other parameters in the present study were greater for females than males. Females consistently had greater abduction than their male counterparts. This is consistent with the findings of Barnes et al, 2 whereas Gill et al reported greater abduction for males than females. 11 Under 60 years of age, extension and horizontal abduction were greater for females than males. Under 30 years of age, internal rotation was greater for females than males.

Our third hypothesis was also supported by the data, as there were significant differences between left and right shoulder range of motion. Although arm dominance was not recorded during data collection, it is reasonable to assume that the vast majority of subjects were right-handed, as approximately 90% of the general population is right-handed. In the study by Gill et al, 11% of participants were left-handed, and shoulder range of motion differences between right and left arms mirrored differences between dominant and nondominant arms. 11 Thus, differences between right and left shoulder range of motion in the present study likely reflect differences between dominant and nondominant shoulders, respectively.
External rotation was greater in the right arm, whereas internal rotation was greater in the left arm. Similar discrepancies are well documented between dominant and nondominant shoulder passive ranges of motion, especially with the throwing and nonthrowing arms of athletes. 19,23,24 The increase in external rotation and decrease in internal rotation in the dominant arm are associated with glenoid and humeral retrotorsion, as first shown by Crockett et al. 6 Wilk et al introduced the concept of total range of motion, defined as the summation of external rotation and internal rotation. Wilk et al have shown that although the dominant arm has greater external rotation and less internal rotation than the nondominant arm, the total passive range of motion of the 2 shoulders is about the same. 19,23,24 Combining external rotation and internal rotation data from the present study, a post-hoc analysis of total rotation active range of motion was produced (Fig. 2). There was a significant interaction among arm, sex, and age (P = .03). While total active range of motion tended to be slightly greater for the right arm, this difference reached statistical significance for fewer than half of the age/sex subgroups (Table B7-6).

Differences in extension and abduction were also shown. Extension was greater in the right shoulder. For male subjects, abduction was greater in the left shoulder. Gill et al reported greater abduction in males than females, 11 whereas Barnes et al found greater abduction in females than males. 2

As with all studies, this investigation had limitations. This study used data from a large, heterogeneous sample. Although this enabled us to establish normative data for active range of motion, future research may focus on specific groups based on recreational activities, work, and lifestyle, such as athletes, workers, or injured people. Another limitation was the omission of arm dominance. The present study found significant differences between ranges of motion of the left shoulder and right shoulder because left and right correlated to nondominant and dominant, respectively, for most people. However, knowing the arm dominance of each individual would have strengthened the statistical differences even further. Statistically, the large number of comparisons analyzed created a high chance of Type I error; thus, our interpretation focused on clusters of multiple differences observed. Furthermore, this study assessed the active range of motion without controlling for motion compensations or abnormal movements.

Conclusion
In conclusion, active range of motion of the shoulder decreases with age. The profiles of these decreases differ across specific shoulder motions. Differences were also demonstrated between sex (male/female) and arm side (right/left), although their effects were less pronounced than that of age. It is also important to note that there are numerous interactions in range of motion measurements between age, sex, and arm. This study should enhance the clinician's appreciation and recognition of shoulder active range of motion during their clinical examinations and help establish personalized goals.

Disclaimers: Funding: No funding was disclosed by the authors. Conflicts of interest: Glenn S. Fleisig is a consultant for Dari Motion. The other authors, their immediate families, and any research foundation with which they are affiliated have not received any financial payments or other benefits from any commercial entity related to the subject of this article.
v3-fos-license
2021-05-22T00:02:51.032Z
2021-01-01T00:00:00.000
234940660
{ "extfieldsofstudy": [ "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.int-res.com/articles/feature/b030p085.pdf", "pdf_hash": "48a77ef03596b641be821d7cbe08752aeb228dde", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:65", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "773512c749b7bccd8b81dca3d25acd45193b94eb", "year": 2021 }
pes2o/s2orc
Impaired larval development at low salinities could limit the spread of the non-native crab Hemigrapsus takanoi in the Baltic Sea

© The authors 2021. Open Access under Creative Commons by Attribution Licence. Use, distribution and reproduction are unrestricted. Authors and original publication must be credited. Publisher: Inter-Research · www.int-res.com. *Corresponding author: [email protected]

ABSTRACT: The Asian shore crab Hemigrapsus takanoi, native to the northwest Pacific Ocean, was recently discovered in Kiel Fjord (southwestern Baltic Sea). In laboratory experiments, we tested the salinity tolerance of H. takanoi across 8 levels (0 to 35) and across 3 life history stages (larvae, juveniles and adults) to assess its potential to invade the brackish Baltic Sea. Larval development at different salinities was monitored from hatching to the megalopa stage, while survival and feeding of juveniles and adults were assessed over 17 d. Larvae of H. takanoi were able to complete their development to megalopa at salinities ≥ 20, and the time needed after hatch to reach this stage did not differ between salinities of 20, 25, 30 and 35. At a salinity of 15, larvae still reached the last zoea stage (zoea V), but development to the megalopa stage was not completed. All juveniles and adults survived at salinities from 5 to 35. Feeding rates of juveniles increased with increasing salinity across the entire salinity range. However, feeding rates of adults reached their maximum between salinities of 15 and 35. Our results indicate that both juveniles and adults of H. takanoi are euryhaline and can tolerate a wide range of salinities, at least for the time period tested (2 wk). However, larval development was impaired at salinities lower than 20, which may prevent the spread of H. takanoi into the Baltic Proper.

INTRODUCTION
Introduction of non-native species is among the most serious threats to the conservation of biodiversity and ecosystem integrity worldwide (Occhipinti-Ambrogi & Galil 2010). Many factors currently facilitate the expansion of species beyond their natural habitats. Among them, the most important are the increasing globalization of international trade, tourism/travel and human population growth (Canning-Clode 2016). Non-native species are considered invasive when they establish new populations, reproduce successfully, proliferate, and affect their new environment (Occhipinti-Ambrogi & Galil 2010). In order to establish self-sustaining populations in a new area, a species must pass through a series of successive steps (Lockwood & Somero 2011): (1) an establishment phase that is characterized by low abundance of the introduced species, (2) an expansion phase, which makes the introduced species a dominant component with strong effects on the invaded system and (3) an adjustment phase, which starts when the introduced species shows behavioural and evolutionary adaptation to the abiotic and biotic conditions of its new habitat (identified and described by Reise et al. 2006).

European waters are severely affected by introduced species (Galil et al. 2014). The Baltic Sea is one of the largest brackish water bodies in the world and is home to 152 non-native and cryptogenic species (supplementary material of Tsiamis et al. 2019). The Baltic Sea is characterized by a strong salinity gradient (Leppäkoski et al. 2002). The surface water salinity ranges from ~2−4 in the northeast to 6−8 in the central Baltic Proper, 15−25 in the southwest and >18−33 in the Danish straits and the Kattegat (Krauss 2001). The generally strong environmental fluctuations in the Baltic Sea system, which are caused by natural conditions such as the salinity−temperature gradient, and which have facilitated pronounced impacts of human-mediated stressors (warming, desalination, overfishing and eutrophication), make the Baltic Sea extremely susceptible to invasions by new species (Paavola et al. 2005, Reusch et al. 2018). Invasive species often show high physiological tolerance of different environmental stressors, such as warming, desalination, reduced light availability, hypoxia and pollution, which is assumed to be an important prerequisite for invasion success (Lenz et al. 2011, Paiva et al. 2018).

Salinity is one of the most important environmental factors controlling survival, reproduction and growth in aquatic organisms, as well as their geographical distribution, in particular in marginal seas (Anger 2003, Thomsen et al. 2018). Salinity is also the main factor that determines a species' invasion success in Baltic Sea habitats (Jaspers et al. 2011, Paiva et al. 2018). Tolerance to low salinities can therefore be used as a measure for assessing the potential of a species to invade the Baltic Sea area, and other comparable habitats (Hudson et al. 2018, Reusch et al. 2018). Even short-term fluctuations in salinity could impair successful planktonic larval development in many species (Anger 2003). For example, the low salinity in the central Baltic Sea is supposed to restrict the expansion potential of the invasive comb jelly Mnemiopsis leidyi in the southeastern regions, since its reproductive ability decreases under low salinity conditions (Jaspers et al. 2011). Some invasive species expand their salinity tolerance window during the invasion process (Charmantier et al. 2002); others narrow their salinity range in their introduced compared to their native regions, and are therefore not able to thrive in brackish water habitats (Pauli & Briski 2018).
Consequently, species with a broad salinity tolerance may have a higher probability of establishment in new habitats, particularly in variable environments with fluctuating salinities (Lenz et al. 2011). In addition to this broad species-specific tolerance (Paiva et al. 2018, Pauli & Briski 2018, tolerance to low salinities can differ between ontogenetic stages of the same species (Charmantier 1998, Anger et al. 2008. Em bryos and larvae of marine decapod crustaceans, for example, are often more sensitive to osmotic stress than juveniles and adults (Bas & Spivak 2000) be cause they lack specific ion regulatory epithelia such as gills (Charmantier et al. 1998). Therefore, adaptations to extreme salinity regimes at all life history stages are necessary for the successful establishment of a species in a brackish environment (Charmantier 1998). A crustacean recently introduced into the brackish Baltic Sea is the Asian brush-clawed shore crab Hemigrapsus takanoi (Asakura & Watanabe 2005). This species, which is native to the northwest Pacific Ocean, inhabits muddy and rocky shores as well as sheltered harbours and estuaries (Gollasch 1998), but can also be found on soft sediments in the subtidal (Asakura & Watanabe 2005). Adult H. takanoi from native regions are known to be euryhaline and tolerate a broad salinity range from 7 to 35 (Mingkid et al. 2006a). The first record of H. takanoi in Europe was in 1993 in the German North Sea, when several individuals were found on ship hulls in Bremerhaven Harbour (Gollasch 1998). The species subsequently ex tended its range and reached the coasts of France, Spain and Belgium, as well as the Dutch and German Wadden Sea (Gittenberger et al. 2010 and references therein). Today, it is also known from the western part of the English Channel, and from North Kent and Essex in Great Britain (Ashelby et al. 2017 and references therein). In the southwestern Baltic Sea (Kiel Fjord), H. takanoi was first recorded in summer 2014 (Geburzi et al. 2015). In the southwestern Baltic Sea, H. takanoi might alter the existing ecosystem (Geburzi et al. 2015) by competing with the 2 main benthic invertebrate predators, the starfish Asterias rubens and the shore crab Carcinus maenas, for the same food source, i.e. the blue mussel Mytilus edulis (Reusch & Chapman 1997). However, to predict whether H. takanoi can establish a stable population in the southwestern Baltic Sea, and whether it can spread further into the Baltic Proper, precise knowledge of its salinity tolerance across the full reproductive cycle is essential. Although only limited information is available for northern Europe (but see van den Brink et al. 2012), juveniles and adults of H. takanoi in the species' native range have been observed in waters with salinities as low as 7 to 9 (Mingkid et al. 2006a, Gittenberger et al. 2010. This is surprising, since laboratory experiments with individuals from Tokyo Bay only showed successful development from megalopa to juvenile at salinities ≥ 25 (Mingkid et al. 2006b). The present study determined the influence of a wide range of salinities on survival and feeding in different life history stages of H. takanoi. We monitored the developmental success of larvae to the mega lopa stage and assessed food intake of juveniles and adults. 
Individuals from a population that was discovered in 2014 in Kiel Fjord (salinity around 15) were exposed to 8 levels of salinity, from freshwater (0) to fully marine conditions (35) to elucidate the species' potential to establish a stable population in this region, and to further invade the Baltic Proper. In addition, a salinity dataset, collected in Kiel Fjord over 13 yr, was used to (1) assess salinity ranges and variability in this region, (2) correlate salinity with larval development and juvenile settlement success in Kiel Fjord and (3) predict the dispersal capacity of H. takanoi towards the Baltic Proper. Survival and development of larvae Ovigerous female Hemigrapsus takanoi, which usually bury themselves in the muddy bottom of the fjord or hide in mussel Mytilus edulis beds (Geburzi et al. 2015, Nour et al. 2020, were collected by scraping mussel aggregates from pilings and from the bottom of the inner Kiel Fjord (54°32.9' N; 10°14.8' E) be tween mid-July and early August 2017. To identify females with eggs close to hatching, a small sample of eggs (20−30) was isolated carefully from each individual female using a fine sterilized tweezer. The samples were then investigated under a dissecting stereo microscope (Olympus SZ51). Four females with carapace widths of 17.3 ± 0.8 mm (mean ± SD), which had broods with almost 10% yolk, prominent eye spots, and fully developed zoeae close to hatch (van den Brink et al. 2013), were used to study larval development under different salinity regimes. Each female was reared in an individual 2 l plastic aquarium filled with aerated seawater of the same salinity as the Kiel Fjord at the time of collection (15−16.7), at a temperature of 20 ± 0.5°C and a photoperiod of 12 h: 12 h (light:dark). Water was exchanged daily and females were offered crushed mussels ad libitum twice per week until larval hatching. The duration from capturing the female crabs to the hatching of embryos never exceeded 2 wk. Eight salinity levels (0,5,10,15,20,25,30 and 35) were adjusted using filtered (0.2 μm millipore) and aerated North Sea water (salinity ~33). Salinities between 5 and 30 were obtained by diluting the seawater with appropriate amounts of deionized tap water (Epifanio et al. 1998, Bas & Spivak 2000, while for the highest salinity (35), artificial sea salt (SeequaSal) was added. Although diluting seawater may lead to a reduction in nutrient concentrations mainly at low salinities, this is a common method used in salinity assays on Brachyura (e.g. Charmantier et al. 2002, Anger et al. 2008) and should not have an effect on the crabs. For freshwater conditions (salinity = 0), we used deionized tap water. For salinities from 5 to 30, the seawater pH was slightly reduced due to the decreased bicarbonate levels of the added deionized water. We adjusted pH NBS (pH at the NBS scale) to a value of ~7.9, resembling Kiel Fjord's pH NBS during crab collection (see also Thomsen et al. 2015 for Kiel Fjord carbonate chemistry), using 1M sodium bicarbonate solution (Thomsen et al. 2018). pH NBS values were measured with a WTW 3310 pH-meter equipped with a SenTix 81 pH electrode. Salinity was adjusted using a WTW Cond 3110 equipped with a Tetracon 325 electrode. Immediately after hatching, actively swimming zoea I larvae were transferred into individual 100 ml culture vials filled with 80 ml of seawater using widebore pipettes. For each salinity, ten 100 ml vials, each containing an initial number of 10 zoea I larvae, were used. 
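The dilution scheme described above amounts to simple proportional mixing; as a minimal illustration (not from the original methods, and assuming salinity mixes approximately linearly by volume), the required volumes per litre of medium can be computed as follows.

    # Fraction of North Sea stock water (salinity ~33) needed per target salinity;
    # the remainder is deionized tap water. Salinity 0 used pure deionized water and
    # salinity 35 was reached by adding artificial sea salt to the stock.
    stock_salinity <- 33
    targets    <- c(5, 10, 15, 20, 25, 30)
    frac_stock <- targets / stock_salinity
    data.frame(target             = targets,
               ml_stock_per_L     = round(1000 * frac_stock),
               ml_deionized_per_L = round(1000 * (1 - frac_stock)))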
This setup was repeated for each of the 4 female crabs. In total, the experiment included 320 vials (8 salinities × 4 females × 10 vials). Larvae were cultured from zoea I to the megalopa stage without aeration, but with open lids (de Jesus de Brito Simith et al. 2012) at 20 ± 0.5°C, pH 7.9 ± 0.1 and a photoperiod of 12 h:12 h (light:dark). The water in each of the vials was exchanged daily. On these occasions, larvae were checked for mortality, as well as for moult, and were provided freshly hatched (<100 μm) brine shrimp Artemia spp. nauplii (Great Salt Lake Brand), at a density of ~10 nauplii per ml seawater (Anger et al. 2008). To avoid changes in experimental salinities, Artemia spp. were rinsed with water of the same salinity level prior to being added to culture vials (de Jesus de Brito Simith et al. 2012). Moulting was checked by inspecting the vials for exuviae (for zoeae II to V) and was confirmed by microscopy (for zoeae IV and V) based on morphology (after Landeira et al. 2019;see Fig. S1a,b in the Supplement at www. int-res. com/ articles/ suppl/ b030 p085 _ supp. pdf). Megalopae could be ob served without microscopic inspection due to their large size and distinct morphology (Fig. S1c). The experiment continued until the last zoea larva at each salinity level had either died or moulted into a megalopa. Overall developmental success was determined by counting those larvae that successfully moulted into megalopae in each vial. The median time (in d) needed by larvae for development from hatching to the megalopa stage at each salinity level was used as an indicator of larval development duration. Survival and feeding rates of juveniles and adults Juvenile (5.8−6.9 mm carapace width) and adult female (16.7−23.2 mm carapace width; Geburzi et al. 2018) H. takanoi were collected in November 2018 as described above. They were kept in the laboratory under seawater conditions present in Kiel Fjord at the time of collection (20 ± 0.5), at 12 ± 0.5°C and a photoperiod of 10 h:14 h (light:dark). All collected individuals had a hard carapace (i.e. they were in the intermoult stage). Water was exchanged daily and crabs were fed crushed mussels ad libitum. In addition to mussels, frozen amphipods Gammarus spp. and shrimps Palaemon spp. were provided for juveniles. Salinity levels (0, 5, 10, 15, 20, 25, 30 and 35) were adjusted as de scribed above. The different salinity treatments were in 60 l tanks and were well-aerated. Two weeks after acclimation to laboratory conditions, a total of 40 juveniles (n = 5 crabs per salinity level) and 24 adults (n = 3 crabs per salinity level) with intact appendages were selected for the experiment. Only 3 adult crabs were used at each treatment level due to the lack of availability of adult crabs in the field at the time of the experiment. Juveniles and adults had carapace widths of 6.45 ± 0.32 (mean ± SD) and 19.98 ± 1.65 mm, respectively. Each juvenile was placed separately in a 1 l Kautex bottle (filled with 0.5 l water), while each adult was placed separately in a 2 l plastic aquarium (filled with 1 l water). Seawater conditions in the aquaria were changed from 20 ± 0.5 (acclimation conditions) to the various target salinities in a stepwise manner at 1.25 salinity units per hour to allow for acclimation. This rate is sufficient for decapod juveniles to reach a constant haemolymph concentration (Charmantier et al. 2002), while the adults of the genus Hemigrapsus spp. 
are known to cope with low/and or fluctuating salinities without any acclimation (Urzúa & Urbina 2017, Hudson et al. 2018. After target salinities were reached, the experiments were started and lasted for 17 d. In a pilot study, we observed that juvenile crabs cannot open living mussels by themselves. However, to ensure a comparable experimental set-up with the same food source as used in the experiments with adults (see below), 3 opened mussels (11.0−11.9 mm) were offered to each individual juvenile crab every other day. Before feeding, all faecal particles were manually removed using disposable transfer pipettes, while the seawater, including remaining mussel shells and soft tissue residues, was filtered through pre-weighed tea bags. Afterwards, the wet filters, including mussel shells and soft tissue residues, were washed quickly with distilled water to eliminate adhesive salt and placed on pre-weighed and numbered aluminium trays. The filters were then dried in a furnace at 60°C for 48 h and dry weight was quantified to the nearest 0.001 g. We assessed food consumption rates of juvenile crabs by quantifying the dry weight (g) of mussel tissue consumed. The dry weight of the mussels offered at the beginning of each trial was estimated by calculating the average dry weight (g) of 21 mussels with a size similar to those used during the feeding trial (11.0− 11.9 mm). This was done by placing each of the 21 mussels (including shell and tissues) on a numbered and pre-weighed aluminium tray. The trays with mussels were dried at 60°C for 48 h and weighed. The dry weights of the soft tissue and shell were measured separately. The mean dry soft tissue of the 21 mussels was 0.0080 ± 0.0004 g (mean ± SD), while the mean dry shell weight was 0.0930 ± 0.0170 g. Consumption rates were calculated as the difference in weight between the mean of the 21 mussels and the mean leftover of the 3 offered mussels during each feeding trial. Adults of H. takanoi were offered 20 closed mussels ranging from 6 to 9 mm (Nour et al. 2020). Every other day, all mussels were collected, all opened mussels were counted, and 20 new mussels were added. Throughout the experiment, water was exchanged every other day, while the survival rate (%) was monitored daily over the 17 d experimental period. Lack of movement even after stimulation with a glass rod, absence of beating activity of scaphognathites (i.e. lack of gill chamber ventilation) and abdomen opening and relaxation (Novo et al. 2005) were considered as signs that an individual had died. Food was offered every other day, while feeding rates were recorded over the entire experimental period. They were assessed as mean mussel dry weight (g) consumed by juveniles, and as the mean number of mussels opened and consumed in the case of adults. Kiel Fjord salinity measurements Salinity in Kiel Fjord was recorded on weekly cruises of the research vessel FK Polarfuchs from 2005 to 2018 using a CTD48M probe (Sea and Sun Techno logy) at a station in front of the GEOMAR pier (Wittlingskuhle, position: 54°20.15' N, 10°9.1' E). Statistical analysis All analyses and statistical procedures were performed using the statistical software R (version 3.6.0, R Development Core Team 2016). To test whether survival rates differed between salinities, the log-rank test, in cases of proportional hazards, or the Peto-Wilcoxon test for non-proportional hazards were used. Survival rates were visualized by Kaplan-Meier curves. 
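As a minimal sketch (not from the original paper) of the survival analysis just described, using the survival and survminer packages named in the text; 'larvae' is a hypothetical long-format data frame with columns day (time to death or censoring), status (1 = died, 0 = censored) and salinity.

    library(survival)
    library(survminer)
    larvae$salinity <- factor(larvae$salinity)
    km <- survfit(Surv(day, status) ~ salinity, data = larvae)
    ggsurvplot(km)                                                   # Kaplan-Meier curves per salinity
    survdiff(Surv(day, status) ~ salinity, data = larvae, rho = 0)   # log-rank test
    survdiff(Surv(day, status) ~ salinity, data = larvae, rho = 1)   # Peto & Peto test
    # Pairwise comparisons with Benjamini-Hochberg adjustment
    pairwise_survdiff(Surv(day, status) ~ salinity, data = larvae, p.adjust.method = "BH")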
The Kaplan-Meier method is a nonparametric estimator of survival that incorporates incomplete (censored) observations, such as those cases where larvae or crabs had not died by the end of the examined period. Pairwise comparisons of survival rates were done for relevant survival and salinity combinations and p-values were adjusted for multiple testing using the Benjamini-Hochberg (BH) correction. Analyses were performed using the survival and survminer packages in R (Therneau 2015, Kassambara et al. 2019). Furthermore, we checked for maternal effects on developmental success with a generalized linear mixed model that included identity of the female crabs as a random factor. Since this model revealed that the contribution of the female crabs to the unexplained variation in the data was marginal (less than 1%), we concluded that there were no maternal effects and excluded this random factor from further modelling. We then used a linear model (LM) fitting a 2nd-degree polynomial to test whether salinity significantly affected the number of larvae reaching the megalopa stage, time to moult, and feeding rates of both juveniles and adults. The choice of model was based on our expectation that the response variables would, within the range of salinities examined in the experiment, be a unimodal function of salinity. We refrained from fitting alternative models post hoc, i.e. after having collected the data. Since larvae reached the megalopa stage exclusively in salinities from 20 to 35, we only included these 4 levels and salinity 15, i.e. the highest salinity at which the megalopa stage was not reached, in the analyses. Since both juveniles and adults died at a salinity of 0, we obtained no feeding rates for this level. We checked whether the assumptions of the LM were met by inspecting residual plots. In addition, for larval development, the salinities at which response variables were maximized (i.e. larval survival) or minimized (i.e. time needed to reach the megalopa stage) were identified with polynomial curves fitted to the data. This was done for all cases in which salinity significantly affected the examined response variables. Furthermore, we verified whether the optimum point (maximum or minimum) of a given response variable occurred in the range of salinity levels (0−35) used in this study by applying the general quadratic equation as follows (Koram 2019):

y = ax² + bx + c    (1)

where a and b are the slopes of x² and x, respectively, and c is the intercept of the regression line. We then determined the optimum point as follows:

x-coordinate of the vertex (optimum point) = −b/(2a)    (2)

3. RESULTS

Development of larvae to the megalopa stage
Although survival rates did not differ between experimental groups in the salinity range between 20 and 35 (pairwise Peto and Peto tests; Table S1), the percentage of larvae that reached the megalopa stage within 50 d after hatching significantly changed with salinity (2nd-degree polynomial regression: F(2,197) = 124, p ≤ 0.001, R² = 0.55). The highest proportion of larvae successfully developing into megalopae was observed at salinities of 25 and 30 (median proportion was 50 and 40%, respectively; Table S2). In this regard, the optimum salinity was ~26.5. At salinity 0, zoea I suffered from abrupt mortality during the first 24 h and no larva moulted to the next stage (Figs. 3 & S2a). At salinity 5, 4.7 ± 3% (mean ± SD) of larvae moulted to zoea II, but none of them reached the zoea III stage (Figs. 3 & S2b−c).
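As an illustration (not from the original paper) of the 2nd-degree polynomial model and the vertex calculation of Eqs. (1)-(2), and of how an optimum such as the ~26.5 reported above can be obtained from the fitted coefficients; 'dev' is a hypothetical data frame with columns salinity and megalopa (number of larvae per vial reaching the megalopa stage).

    # Raw polynomial terms keep the coefficients directly interpretable as
    # c (intercept), b (linear slope) and a (quadratic slope) of y = a*x^2 + b*x + c.
    fit <- lm(megalopa ~ poly(salinity, 2, raw = TRUE), data = dev)
    summary(fit)                      # F statistic, p-value and R^2 as reported in the Results
    cf <- coef(fit)                   # cf[1] = c, cf[2] = b, cf[3] = a
    x_opt <- -cf[2] / (2 * cf[3])     # Eq. (2): x-coordinate of the vertex
    x_opt                             # optimum salinity (a maximum when a < 0)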
At salinity 10, larvae could not moult beyond zoea III, while they reached zoea V at salinity 15 (Figs. 3 & S2b−d).

Feeding rates of juveniles and adults
There were significant effects of salinity on feeding rates in juveniles and adults of H. takanoi. Feeding rates in juveniles increased significantly with increasing salinity (2nd-degree polynomial regression: F(2,32) = 30, p ≤ 0.001, R² = 0.65; Fig. 4a). In adults, food intake also increased between salinities 5 and 15, but then reached a plateau and remained constant across the remaining salinity levels (2nd-degree polynomial regression: F(2,18) = 27, p ≤ 0.001, R² = 0.75; Fig. 4b).

Prevailing salinity regimes in Kiel Fjord
Median (mean) salinity in Kiel Fjord over the entire experiment was 16.8 (17.0), ranging between 8.4 and 24.7 (Fig. 5). Salinity was 15.7 (15.9) in waters up to 1 m below surface, 16.5 (16.9) at 7 m depth and 18.1 (18.4) at 18 m depth. Salinity varied among seasons and was lowest in spring and summer, when larvae of H. takanoi hatch in Kiel Fjord, and highest in autumn and winter, when juveniles are found (Fig. 5d).

Fig. 3. Influence of salinity (0−35, a−h) on the percentage of H. takanoi larvae reaching the next larval stage during the experiment. Percentages of zoea I that reached zoea II are based on the initial 10 larvae per vial. For the remaining observations (from zoea II to the megalopa stage), percentages are based on the number of individuals that reached the previous stage. The circles correspond to the replicates (n = 4 females used) and shaded areas define the 95% confidence intervals. Data cover the time until the last zoea V had moulted to the megalopa stage or died. Orange dots (megalopa stage) along the x-axis hide all zoea stages (I to V) that are still under development to the next stage or died.

DISCUSSION
Our experiments show that the larval, juvenile and adult stages of Hemigrapsus takanoi, a species that recently arrived in the southwestern Baltic Sea, are affected differently by salinity. Although larvae of this crab species were more sensitive to low-salinity stress than adults, they successfully developed to the megalopa stage as long as salinities were equal to or above 20. Neither the duration of development to the megalopa nor the probability of reaching this stage differed across higher salinities (≥ 20); however, survival and success in development were highest at a salinity of 25. In contrast to larvae, juveniles and adults were more robust to reduced salinities and showed 100% survival in response to salinities between 5 and 35. Feeding rates, however, also differed between salinity levels. Generally, feeding increased with increasing salinity. In adults, however, feeding rates remained constant at salinities ≥ 15.

In general, increasing salinity favoured larval development of H. takanoi from Kiel Fjord. All larvae died before reaching the megalopa stage when they were reared at salinities below 20. In contrast to our findings, Mingkid et al. (2006b), who tested a salinity gradient from 5 to 35 (5 salinity unit steps) in Tokyo Bay, showed that 1.4 and 30% of H. takanoi larvae successfully moulted into megalopae at salinities as low as 10 and 15, respectively. The discrepancy in the lower salinity thresholds of the present study (≥ 20) and Mingkid et al. (2006b) (≥ 15) suggests that H. takanoi larvae have a narrower salinity range in their new habitat than in their native range. Supporting this, Kiel Fjord larvae survived and developed best at salinities between 25 and 30.
In contrast, Mingkid et al. (2006b) demonstrated the highest percentage of larval development to megalopae at a salinity of 20. Seemingly, Kiel Fjord H. takanoi larvae prefer higher salinities than larvae from a native-range population (Mingkid et al. 2006a), and a similar pattern was observed for gammarid crustaceans (Pauli & Briski 2018). Between-population differences in salinity tolerance have also been shown for other crustacean taxa (Ojaveer et al. 2007). For example, Carcinus maenas from the Baltic Sea have a higher capacity to hyper-osmoregulate than individuals from a North Sea population (Theede 1969). This potential shift in salinity preference might be the result of adaptations at the population level to different local environmental conditions after arrival in a new habitat, as shown in a study on 22 populations of 8 gammarid species from different regions (Paiva et al. 2018), or may have occurred during the invasion process. A recent study by Landeira et al. (2020) showed that survival of newly hatched H. takanoi zoea I larvae from Tokyo Bay decreases with decreasing salinity. In addition, a slowing of their swimming speed, accompanied by random swimming trajectories, was observed at salinities below 20. This suggests that low salinities not only restrict invasiveness, but also limit the local dispersal capacity of H. takanoi. Within Kiel Fjord, H. takanoi must tolerate salinities between 8 and 25, while salinities range between 7 and 35 in the native range of H. takanoi, e.g. Tokyo Bay (Mingkid et al. 2006a). The salinity tolerance thresholds and performance maxima at salini- [...] -tion from the Kattegat due to strong westerly winds pushing highly saline waters into the Baltic Sea (Lehmann & Javidpour 2010).

[Figure caption fragment: The approximate time range of occurrence of larvae, juveniles and adults is indicated by arrows (above the plot).]

It is generally thought that the European populations of H. takanoi arose from multiple independent introductions into the French Atlantic, French British Channel, and Dutch Wadden Sea (Markert et al. 2014). In addition, the North Sea populations might trace back to a secondary introduction of individuals from a donor population already adapted to marine conditions (Makino et al. 2018, Geburzi et al. 2020). Furthermore, the North Sea populations could then have adapted themselves to fully marine conditions for > 25 yr, while the Kiel Fjord population presumably only had ~5 yr to adapt to local environmental conditions. However, detailed studies on the osmoregulation capacity of H. takanoi larvae from populations along the invasion gradient would be needed to confirm this hypothesis. We found larval duration to be only marginally impacted by osmotic stress at salinities of 20 and higher. This has also been found in other crab species such as the mangrove crab Ucides cordatus from the Caeté Estuary, northern Brazil (Diele & Simith 2006) and the mud crab Scylla olivacea from estuaries and creeks in Thailand (Jantrarotai et al. 2002). Kiel Fjord H. takanoi larvae in the present experiment developed generally slower (larval duration of 35−37 d) than individuals of the same species from its native range (~16−21 d; Mingkid et al. 2006b). Diele & Simith (2006) suggest that variation in the time needed for development in U. cordatus could be due to diet. In our study, we fed larvae with Artemia sp. nauplii only, while Mingkid et al. (2006b) provided rotifers for food until the zoea III stage was reached, after which a mixture of Artemia sp. nauplii and rotifers was offered.
However, differences between the 2 studies could also be explained by differences in temperature during incubation of the larvae, i.e. 20°C herein and 24°C in Mingkid et al. (2006b) (see also Epifanio et al. 1998, Spitzner et al. 2019 for temperature ef fects on crustacean larval development). In contrast to larvae, juveniles and adults of Kiel Fjord H. takanoi tolerated a wide range of salinities from 5 to 35, at least for the 2 wk these individuals were exposed to the stress gradient. In contrast, freshwater conditions led to 100% mortality in juveniles and adults within 2 and 4 d, respectively. Shinji et al. (2009) exposed adult H. takanoi from the native range (Tokyo Bay) to salinities from 0 to 40 for 8 h to study their immediate physiological responses. Although no mortality was observed over the short incubation period at any salinity, oxygen consumption and ammonia excretion levels were remarkably increased at a salinity of 0, while no significant differences were detected among all other salinity levels (Shinji et al. 2009). Data from the congeneric H. sanguineus showed a higher survival probability at a salinity of 5 compared to a salinity of 1, mainly explained by the ability to maintain internal haemolymph osmolality under osmotic stress at a salinity of 5 for several days (Hudson et al. 2018). Since osmoregulation is an energetically demanding process (Anger 1991), future research is required to study the consequences of osmotic stress on H. takanoi in the long-term. Feeding rates in both juvenile and adult H. takanoi were strongly influenced by salinity, yet in different ways. In juveniles, feeding decreased linearly with decreasing salinity. These findings are in line with observations made of juvenile crabs of the species Portunus trituberculatus (Shentu et al. 2015). Feeding rates in adults, however, were not affected across the salinity range from 15 to 35. Contrary to our results, Urbina et al. (2010) showed that feeding rates in adult H. crenulatus decreased with increasing salinities. They interpreted this as a compensatory mechanism to face the increased energy costs that emerged from excretion and respiration during osmoregulation activity. In general, a decrease in feeding rates as a reaction to environmental stress has been observed in different marine invertebrate taxa, such as cnidarians (Podbielski et al. 2016), crustaceans (Appelhans et al. 2012), molluscs (Zhang et al. 2016 and echinoderms (Stumpp et al. 2012). In decapod crustaceans, moulting is an energy consuming process, accompanied by an increase in metabolic rate (Skinner 1962). The process is modulated by many environmental factors like temperature, photoperiod and salinity, and it involves many physiological changes in the crabs themselves (Urzúa & Urbina 2017). The results presented here do not reflect any potential effects of moulting as a response to the salinities we applied, since all individuals were intermoult crabs. While adults of H. takanoi tolerated the salinity range of 5 to 35, our results suggest a clear bottleneck in larval development. The critical threshold for H. takanoi larvae to complete their full development at 20°C (common summer Kiel Fjord temperature) was found to be between 15 and 20, while in Kiel Fjord (southwestern Baltic Sea) salinities range between 8 and 25 (Fig. 5). The salinity regimes recorded at different water depths between 2005 and 2018 would therefore allow juveniles and adults of H. takanoi to thrive. 
In 2017, ovigerous females appeared in late June, at salinities higher than 15, and larval development probably continued into September and October, when salinities were still above 17 (Nour et al. 2020). In line with this, large numbers of juveniles were observed in November 2018, when salinities were around 19 (O. M. Nour pers. obs.). Between 2005 and 2018, the median salinity (surface to a depth of 7 m) in Kiel Fjord during the reproductive season, when hatching and development of larvae takes place (June to October), ranged from 8.4 to 22 (Fig. 5). This indicates that Kiel Fjord salinity conditions can allow successful larval development, at least during years with higher salinities. However, to be able to identify years in which successful larval development can occur more reliably, experiments investigating the salinity range between 15 and 20 are necessary. Salinity thresholds for larval development have been observed in other species from Kiel Fjord (Eriocheir sinensis: Anger 1991, C. maenas: Cieluch et al. 2004, Asterias rubens: Casties et al. 2015. Hence, interannual salinity fluctuations recorded in Kiel Fjord might play a crucial role in stabilizing the population dynamics of its keystone species (Casties et al. 2015). A decline in salinity below a critical threshold during the spawning season of A. rubens, for example, might inhibit larval development (Casties et al. 2015). We do not have sufficient data about the structure of the C. maenas populations in the Baltic Sea and about seasonality in their reproduction to say whether salinity could also play a role in C. maenas larval development. However, it is known from other European coasts that there are 2 main peaks in egg production of C. maenas (early summer and late autumn; Torres et al. 2020), and larvae require salinities > 15 to complete their development (Anger et al. 1998). If this is also the case for the Baltic Sea population, it is plausible to assume that even with the expected desalination of the Baltic Sea (Gräwe et al. 2013), larvae from autumn embryos (mean salinity > 15; Fig. 5) can maintain the population. In contrast, H. takanoi in the Baltic Sea shows only one peak in egg production and that is in summer (Nour et al. 2020). It is therefore likely that a failure in recruitment will occur under the future Baltic Sea conditions, which consequently could cause the extinction of H. takanoi populations. H. takanoi larvae are planktonic and likely move towards the water surface immediately after hatching (Landeira et al. 2019). In the southwestern Baltic Sea, surface seawater is generally less saline than deeper waters (Fig. 5), meaning Kiel Fjord H. takanoi larvae are facing a dilemma. On one hand, surface waters show a higher phytoplankton abundance and consequently also a higher abundance of small zoo-plankton organisms (Anger 2003), which could serve as a food source for larvae. On the other hand, the salinity regimes in surface waters potentially restrict successful development. However, it remains to be investigated if H. takanoi larvae can meet their salinity and feeding requirements by actively migrating through the water column. The findings of the present study suggest that H. takanoi populations are limited in their distribution to the southwestern Baltic Sea, mainly due to their larval salinity sensitivity. Future monitoring of the spatial and temporal distribution of H. 
takanoi larvae along the salinity gradient from the southwestern Baltic Sea into the Baltic Proper will reveal whether this assumption proves correct. Whether the prolongation of larval development (~35−37 d) observed in this study, in comparison to the time needed for larval development in native habitats (~16−21 d; Mingkid et al. 2006b), is an advantage for H. takanoi dispersal and population establishment in the further northern and eastern Baltic Sea is unclear. Longer larval development can have important consequences for recruitment success (Sulkin & McKeen 1994) by increasing natural mortality risk due to factors such as predation (Morgan 1995). Nevertheless, a longer developmental period increases the probability that larvae will spread and reach appropriate substrates prior to metamorphosis (Jackson & Strathmann 1981), which could allow the species to colonize new areas and would enhance gene flow among local populations (Díaz & Bevilacqua 1986). Numerous studies have revealed that larval ability to cope with salinities outside the optimum range is also strongly influenced by temperature (Anger 1991, Epifanio et al. 1998). In nature, environmental stresses such as warming and desalination do not usually occur in isolation and can therefore interactively affect organisms (Epifanio et al. 1998). Hence, studying the combined effects of these 2 factors is important to assess the response of H. takanoi populations to the future abiotic changes expected to occur in the Baltic Sea. The fact that female brachyuran crabs brood externally may allow their embryos to acclimate to low salinities during embryogenesis and this could potentially increase their ability to tolerate low salinities after hatching (Giménez & Torres 2002). In our larvae experiment, we collected all ovigerous crabs from the same locality in Kiel Fjord and they were, prior to experiments, kept in the laboratory at the same salinity that prevailed in the fjord (~20). It is possible that larvae from regions with even lower salinities would show higher survival rates than those observed in our experiments. However, in H. crenulatus, embryogenesis is not completed at a salinity of 5 because females allocate energy to osmoregulation and therefore produce less egg yolk (Urzúa et al. 2018). Similar salinity-related constraints on embryo development have been observed in C. maenas. Even though C. maenas from North Sea populations completed their larval development at a salinity of 20 under laboratory conditions when ovigerous females were kept at a salinity of ~32 (Spitzner et al. 2019), females discarded their eggs when they were incubated at salinity of 20 (G. Torres unpubl. data, in Torres et al. 2020). With larvae of C. maenas, negative effects of low salinities on survival and development time were mitigated by high temperatures (Spitzner et al. 2019). However, Torres et al. (2020) found no evidence of embryonic acclimation to low salinities in C. maenas from North Wales, UK, while, in contrast to this, acclimation to low salinities during embryogenesis limited the thermal mitigation of low salinity stress for zoea I larvae. This thermal mitigate limitation could be related to the depletion of energy reserves during embryo development, which occurred when ovigerous females were exposed to low salinities (Giménez & Anger 2001). It resulted in decreased larval survival and reduced development rates (Torres et al. 2020). Low salinity in the maternal habitat of H. 
takanoi might produce larvae that perform sub-optimal swimming and have lowered survival rates (Landeira et al. 2020). Given these possible maternal ef fects, the best offspring performance might occur when conditions in maternal and offspring environments are similar (Uller et al. 2013). Hence, assessing the response of H. takanoi offspring to salinity stress while considering possible maternal effects will re quire further studies. Since juvenile and adult stages of H. takanoi have the potential to invade the central Baltic Proper, which has bottom salinities of 8 to 6 (HELCOM 2018), the entire reproductive cycle of H. takanoi may take place in this lower salinity region if the species manages to locally adapt to these conditions. However, physiological responses of this species to salinity stress still need to be investigated in studies that cover a longer period of its lifetime. CONCLUSIONS Previous studies have revealed that non-native species are generally tolerant of a wide range of salinities and tend to be more resistant to stressful conditions than native species (Lenz et al. 2011, Paiva et al. 2018). Herein, we showed a strong influence of salinity on the earliest stages of the recent invader H. taka noi. The development of H. takanoi larvae was only completed in salinities between 20 and 35, which contrasts with the salinity tolerance of larvae from the native range of this species (Mingkid et al. 2006b). Reasons for this discrepancy are unde termined, but could potentially relate to the previous adaptation of H. takanoi populations established in the southwestern Baltic Sea to fully marine conditions during a secondary spread in the invasion process. North Sea habitats, which are fully marine, could have served as stepping stones prior to the introduction of this species into Kiel Fjord (Geburzi et al. 2020). Juveniles and adults can tolerate salinities as low as 5, but their feeding rates declined at low salinities, with possible long-term implications for their fitness. Although field observations showed that the southwestern Baltic Sea is a suitable habitat for H. takanoi (Nour et al. 2020), salinity likely represents a hurdle for a further invasion into the central and eastern Baltic Sea, in particular for the larval stages of H. takanoi. According to future climate scenarios (Meier et al. 2012), which predict a reduction in salinity across the Baltic Sea by the end of the century, Kiel Fjord will most likely experience a decline in salinity of 2 units on average (Gräwe et al. 2013), which could lead to more frequent and longer phases during the year with salinities outside the tolerance boundaries of larval H. takanoi. The question re mains whether the species will be able to undergo full larval development in the future, if it cannot adapt to the changes predicted for this region.
v3-fos-license
2023-05-28T15:15:42.275Z
2023-05-26T00:00:00.000
258938065
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.scielo.br/j/bn/a/ZxTTxwKctMBv8SByc8MSQbF/?format=pdf&lang=en", "pdf_hash": "560fafa8be7a6c740a27fdedf4eff85989621d72", "pdf_src": "Dynamic", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:67", "s2fieldsofstudy": [ "Education" ], "sha1": "13229b5f8598781fa4af3adc42c6a34f9228354e", "year": 2023 }
pes2o/s2orc
A decade of Zoology Summer Course: impressions and impacts of the first university extension course on Zoology in Brazil Abstract Although the diversity of animal groups distributed in Brazil provides countless research opportunities, the current scenario does not follow this demand. The reasons for the disconnections range from inequality in the availability of resources for teaching and research to the focus of researchers on specific groups of animals, while others remain neglected. Training new potential Brazilian researchers interested in Zoology is essential for a greater understanding of this diversity, as well as exposing those potential new researchers to new groups and different work possibilities. Thus, the Summer Course in Zoology (in Portuguese, CVZoo) promoted by the Graduate Program in Zoology at the University of São Paulo, over the last ten years, has been seeking to contribute to this training of new researchers in the field of Zoology, as well as in updating teachers through university extension activities. In order to assess the impacts caused by CVZoo on the academic and professional training of the participants, Google forms were sent to participants in the ten editions of the course, as well as compiled information available on the Lattes Platform. Qualitative and quantitative analyses showed the profile of graduates, their expectations, and perceptions about the course. Based on these data, we demonstrate the CVZoo’s efficiency in popularizing Zoology throughout the country in contributing to the decentralization of knowledge as well as in meeting the urgent concerns of making access to knowledge more egalitarian and socially fair. Introduction 1. Research and teaching in Zoology in Brazil Brazil is a megadiverse country that concentrates in its territory a unique diversity of several animal groups (Mittermeier et al. 1997).Lewinsohn & Prado (2002) estimated that there are between 170 and 210 thousand known species in our country, a number that has been increasing significantly in the last twenty years.However, there is still a long way to go, since estimates suggest the existence of a number seven times greater than the currently described species (Lewinsohn & Prado 2005).In addition to the species that remain without proposed names, an extensive body of knowledge still awaits to be revealed. Given the potential load of knowledge that this diversity represents, Zoology emerges as an area of knowledge with the purpose of cataloging and understanding both current and extinct animal diversity.The area can be subdivided into several subareas, one of which is Systematic Zoology, whose objectives are to understand the evolutionary history of species and propose hypotheses to name and classify them.However, although more than 500 Brazilian researchers call themselves "systematists" and "taxonomists", they are unevenly distributed, mostly concentrated in the Southeast (about 50%) and South (20%) regions, with emphasis on the states of São Paulo, Rio de Janeiro, Paraná and Rio Grande do Sul (Marques & Lamas 2006).This is quite inconsistent with the diversity of biomes and specialized fauna found in each of the country's regions, and the potential for discovering new species in each of them. 
Similar geographic patterns are observed in scientific production in Zoology, with the Southeast holding the largest share of productivity (70% of papers and 75% of citations), and in graduate programs in the area, most of which (approximately 70% of master's and PhD programs) are concentrated in the South and Southeast regions (data extracted from Marques & Lamas 2006). However, if we look at federal investment in university projects, we find that the South and Southeast regions once again hold most of the research funds, which include the provision of scholarships for students and result in greater adherence and academic productivity (Marques & Lamas 2006). Faced with this unequal scenario of Zoology development in Brazil, the creation and execution of actions that equalize knowledge, teaching and scientific productivity across the country are urgently needed. Among the actions proposed by Marques & Lamas (2006), there are suggestions aimed at training new professionals in different regions of the country, increasing scientific production and disseminating knowledge to different audiences. The offering of specialization courses at meetings and scientific events is also mentioned (Marques & Lamas 2006), but extension projects with the participation of the university community were not considered as one of the possible agents for the expansion and decentralization of Zoology in Brazil. The Summer Course in Zoology (in Portuguese, CVZoo), created and organized by students of the Graduate Program in Zoology at the University of São Paulo, stands out as an important milestone for university extension in Zoology in Brazil. Below is a brief history of the course. History of CVZoo The Summer Course in Zoology began in January 2013, organized by students from the Graduate Program in Zoology (PPGZOO) at the University of São Paulo and supervised by Prof. Dr. Alessandra Bizerra. Initially, the course had the following objectives: 1) to disseminate the lines of research in Zoology developed by students of the graduate program and 2) to provide teaching practice experiences and thus fill a gap in the professional training of such students (Soares et al. 2020). The course lasts for two weeks, the first dedicated to classes on general topics in Zoology, such as Systematics, Philosophy of Science, Animal Behavior and Biogeography, and the second containing activities on more specific subjects. Although the first week of the course has changed little over time (with the exception of the remote editions that occurred in 2021 and 2022), the second week has changed considerably. In the first four editions, participants were divided into three groups according to the groups studied - Vertebrates, Panarthropoda and Non-Panarthropoda - and the activities were carried out jointly. Since the fifth edition, this division into three groups no longer occurred and students began to assemble their own schedule, choosing from several options of workshops and short courses on taxonomic groups (e.g., Annelida, Arthropoda and Chondrichthyes) and on research and teaching methods. 
As of the third edition, the selection process for participation in the course began to consider the proportion of enrollments coming from the five regions of Brazil (Midwest, Northeast, North, Southeast and South), thus seeking to expand knowledge to more people. From the fifth edition onwards, teachers became part of the course's target audience, participating in workshops in the second week and developing a research project or didactic sequence. Thus, updating knowledge in Zoology for teachers was included as one of the objectives of the course. More detailed information about the participant selection process can be found in the work of Soares et al. (2020). Since the first edition of the course, members of the organizing committee have sought various ways to raise funds and thus partially or fully defray the cost of accommodation at the Sports Practices Center of the University of São Paulo (in Portuguese, CEPE-USP) and of meals at the university restaurant. In this way, the principal aim is to contribute to reducing expenses and facilitating access for students from more distant regions and in less favorable socioeconomic conditions. As an evaluation criterion, course participants are invited to develop over the two weeks a research project in the format of a master's project, on a topic within Zoology, under the supervision of members of the course organizing committee. On the last day of the course, the projects are presented and evaluated by an examining board composed of members of the organizing committee not involved in the development of the projects. The participation and attendance of the participants are also considered as evaluation criteria and make up the final grade. In ten editions, 460 students from different regions of Brazil and other Latin American countries (e.g., Peru, Colombia) were selected to participate in the course (Table 1), among more than 4,500 enrollments. Over time, adjustments in the number of places were necessary to meet the growing demand for registrations. The offer of places doubled between the first and tenth editions, going from 30 places in 2013 to 60 in 2022, with the number of people registered above 400 in all editions from the fourth onwards. Given the aforementioned need to provide access to Zoology teaching and to equalize the generation of knowledge throughout the country, and considering the ten years of application of an extension course with concerns beyond content, this study had the following objectives: 1) to survey and evaluate the profile of the certified participants who helped build CVZoo over ten years, 2) to investigate their motivations, expectations and evaluations, and 3) to evaluate the impacts of the course on the academic and professional training of the certified participants. Material and Methods To profile the course concluding participants, data on their academic background (degrees obtained, universities, region and animal phylum studied) were obtained through the Curriculum Lattes Platform. Only participants who passed the course and received certification were considered. 
Two questionnaires were developed, one for participants selected as undergraduate students and the other for participants selected as teachers. Both contained multiple-choice and essay questions and were divided into three parts (Appendix 1). Only the first part had the same content in both questionnaires, being focused on the profile and self-identification of the graduates of the course (e.g., nationality, race, sexual orientation and gender identity) as well as on their research area and current institution. These data allowed us to obtain additional information regarding the profiles of participants. In the second part of the questionnaire addressed to the students, the questions dealt with motivations and expectations related to CVZoo and with impressions about the workshops and the process of developing a research project. In the second part of the teachers' questionnaire, motivations and expectations were also addressed, as well as the relationship between the topics covered and the school environment. In the third part, students were asked about the influence of CVZoo on academic life (research and extension), while teachers answered questions about teaching and prospects for pursuing an academic career. In order to understand how the target audience has been informed about CVZoo activities and editions, the third part of the questionnaire also included, for both categories, a question about the method by which the participant became aware of the course, covering all means of dissemination incorporated throughout the editions (social networks, website, email list and through undergraduate colleagues). The questions were arranged in Google Forms and sent to students and teachers who completed CVZoo on two different occasions. The first research round took place from February 20th to April 19th, 2018 (covering certified participants from the first six editions) and the second from March 4th to June 4th, 2022 (covering certified participants from all course editions). For certified participants who answered the forms on both occasions, only the second answer was considered, as it was the most recent, thus excluding the possibility of double entries for the same participant in the quantitative analyses; in the qualitative analyses, both responses were considered. The total (n) of responses for each question on the form was treated independently, so that questions left unanswered by any respondent did not interfere with the calculations for other questions. The publication of the data provided here was authorized by the respondents. 
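A minimal sketch of the de-duplication rule described above (keeping only the most recent answer for participants who responded in both rounds) is given below; the column names and values are hypothetical placeholders, since the actual form export is not reproduced here.

```python
import pandas as pd

# Hypothetical form export; the column names are assumptions, not the real export schema.
responses = pd.DataFrame({
    "participant_id": ["p01", "p02", "p01", "p03"],
    "survey_round":   [2018, 2018, 2022, 2022],
    "expectations":   ["met", "exceeded", "exceeded", "met"],
})

# For the quantitative analyses, keep only the most recent answer of each participant.
latest = (responses
          .sort_values("survey_round")
          .drop_duplicates(subset="participant_id", keep="last"))
print(latest)
```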
The data obtained through the Curriculum Lattes Platform were compiled in a spreadsheet and standardized (Appendix 2). We categorized the information about the studied phyla following the names of the phyla when dealing with specialized studies (for example Annelida, Arthropoda and Chordata), and when dealing with less specific studies or studies involving more than one phylum, we used three other categories: Fauna (for studies with more than one phylum, or communities such as meiofauna or zooplankton); Protists (for studies with unicellular eukaryotes such as foraminifera); and Others (for studies on other topics, not related to metazoans). Similarly, due to the diversity of graduate programs and the number of graduates in each program, we chose to categorize this information by related areas, thus obtaining the following categories of graduate programs: Animal Biology, Biodiversity and Conservation, Biology, Ecology, Teaching, Entomology, Oceanography, Systematics, Zoology and Others (including areas less related to Zoology, such as Botany, Bioinformatics and Genomics, Biochemistry, Ethnobiology, Geology, Museology, among others). Frequencies of each category and their changing patterns over the ten CVZoo editions were analyzed and described. To assess whether there was a difference in each category (race, gender identity and sexual orientation) over the years, we applied a chi-square test, considering a significance level of 0.05. The answers to the essay questions were analyzed using content analysis procedures as parameters (Bardin 1977).
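The chi-square comparison described above can be carried out with scipy along the following lines; the counts in the contingency table are invented for illustration only and do not correspond to the survey data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: groups of course editions; columns: self-declared categories (illustrative counts only).
table = np.array([
    [20,  8, 4],   # editions 1-4:  white, brown, black
    [18, 10, 5],   # editions 5-7
    [15, 12, 6],   # editions 8-10
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A difference across the editions is taken as significant when p < 0.05.
```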
Results We were able to locate Curriculum Lattes data for 371 concluding participants of the course, and of these 193 responded to the Google forms. Profile of participants According to data collected from the Lattes Platform, most CVZoo participants came from the Southeast and Northeast regions of the country. While nearly 50% graduated from universities in the Southeast and more than 20% from universities in the Northeast, less than 10% came from each of the other regions of Brazil (Figure 1a). A similar pattern is observed when we analyze the regions where graduates completed master's degrees, because the Southeast and Northeast together account for more than 65% of graduates who attended a master's program (Figure 1b). However, this pattern changes significantly for the doctorate, given that most students (>65%) who continued their studies at the doctorate level attend or have attended universities in Southeast Brazil (Figure 1c). At the master's level, some universities concentrate a higher percentage of students. In doctorates, this concentration is even more drastic, with only three universities (UFRJ, UNICAMP and USP) concentrating more than 40% of graduates who are pursuing or have completed a PhD. Altogether, more than 45% (n = 170) of CVZoo concluding participants continued their studies at least at the master's level, 10% (n = 38) continued at the doctorate level, and two participants continued at the post-doctorate level. Over 30% of CVZoo concluding participants have enrolled in programs focused on areas related to Zoology (Figure 2) at both levels (master's and doctorate), most of them at the University of São Paulo (USP). Other USP programs also received graduates from CVZoo, such as the graduate programs in Systematics, Animal Taxonomy and Biodiversity (STAB), Ecology, Entomology, Biological Oceanography and Science Teaching. Such results demonstrate that the course has played a decisive role in attracting new students to postgraduate courses at USP. Another interesting fact about the destination of graduates from the course is the diversity of areas they enter. In addition to the postgraduate programs totally focused on the study of animals, such as the Animal Biology, Entomology and Zoology programs, we also observed many graduates with "zoological" lines of research who are nevertheless enrolled in other programs, such as Ecology and Biodiversity and Conservation. The programs farthest from Zoology, such as Genetics and Evolution, Biosystems, Tropical Diseases or Biotechnology, for example, were all compiled in the category "Others". When analyzing the focal phylum, in all CVZoo editions and throughout the various training levels, more than 60% of participants were interested in Chordata or Arthropoda, while only a tiny percentage of graduates dealt with the study of other animal phyla (Figure 3). 
According to the answers obtained through the forms on students' race (n = 153), the majority declared themselves white (62.1%), followed by brown (24.8%), black (11.1%) and other races (2%). These proportions differ substantially from those of the Brazilian population, of which 54% declares itself to be black, including a broad spectrum of skin colors (IBGE 2019). We also note that the proportions vary over the editions (Figure 4a). Among teachers (n = 11), most self-declared as brown (63.6%) (Figure 4b). Regarding the gender identity of students (n = 150), the proportion of cisgender men (49.3%) and women (50.7%) varied little over the ten years of CVZoo (Figure 4c), but the representation of transgender students is still low (n = 3). Among teachers (n = 11), women were more numerous (72.7%) than men (27.3%) (Figure 4d). On the other hand, a great diversity is observed regarding the sexual orientation of the certified participants (n = 153). Heterosexual students make up the majority of those participants (55.5%), followed by bisexuals (22.2%), homosexuals (20.3%) and asexuals (1.3%); only one student did not wish to disclose their sexual orientation (Figure 4e). Among teachers (n = 12), the distribution differs, with 75% declaring themselves heterosexual, 16.7% bisexual and 8.3% homosexual (Figure 4f). None of the respondents declared having any disability. Publicizing the CVZoo The most effective ways of publicizing the course have included announcements on social media (49.5%) and referrals from friends and fellow graduates (47.4%). Dissemination through social media has changed over the years. Following the progress of and adherence to different forms of virtual communication, especially by the target audience, social networks such as Facebook and the course website itself were more effective in the past (Soares et al. 2018), while Instagram has been the network responsible for attracting the most applicants in the last three editions (2020, 2021 and 2022). This highlights the importance of considering the advent of new social communication tools and of understanding what content is consumed by users (Soares et al. 2018), as they can increase the reach of the course in future editions. Additionally, many enrollees came to know about CVZoo through referrals from former participants, which indicates the satisfaction of these graduates with the quality of the course offered throughout all editions, since such referrals have remained stable over these ten years. General impressions about the course The course was well rated by the participants (students: n = 192; teachers: n = 19), since 73.5% of the students and 63.2% of the teachers stated that their initial expectations were exceeded, 23.4% of the students and 36.8% of the teachers felt that their initial expectations were met, while for 3.1% of the students the initial expectations were only partially met. The positive points most mentioned by the respondents were the contact with people from different regions of Brazil (25.3%), the content offered and the quality of the course programming (23.9%), the motivation to enter graduate programs and pursue an academic career (22.5%), and the closer contact with students and professors of the graduate programs in Zoology and in Systematics, Animal Taxonomy, and Biodiversity and with the lines of research developed at USP (21.1%). These points are in accordance with the extension guidelines established for these activities (FORPROEX 2012). 
Regarding the dissemination of research carried out at the university, visits to the IB-USP laboratories and MZUSP collections were cited by 12.8% of the respondents, and their importance is reflected in the comment of a participant who reported having learned about approaches within the area that she intends to use in the future when she enters a graduate program. The activity of preparing a research project (5.6%) was pointed out by some respondents as a very positive and challenging point of the course. Even so, it contributed to the development of critical thinking about the work of colleagues and about articles already published, in addition to the development of the scientific thinking of the participants involved in the process. One of the participants commented that she was "fearful" because of the project: "I thought I wouldn't be able to develop it, but I found out that it is an essential part of the entire course process." This denotes how part of teaching still remains dissociated from research and how the gradual insertion of undergraduates into Scientific Initiation (CI) activities can facilitate their understanding of the production of knowledge through scientific practice (Massi & Queiroz 2010). Some respondents praised the classes and teaching strategies employed (15.5%) and the dedication of the organizing committee (11.2%). The cost of accommodation and food was cited by 4.2% of the participants, and one of them reported that "the possibility of staying in the accommodation and eating in the university restaurant were decisive points, because at that time he lived in a state very far from São Paulo". This shows that efforts to popularize university extension need to be linked to offering equal conditions of access for all. However, 8.4% of respondents highlighted the need for improvements in the CEPE-USP housing facilities. For some, the environment provided by the course was quite enriching (10%) due to the exchange of experiences and its influence on their academic training. According to one of the respondents, the course was a great "watershed". The marked influence of the course is present in the response of another participant, in the following passage: "whenever I teach a short course, I remember how I experienced CVZoo; the enchantment that students have with us was the same as I experienced when I was a CVZoo student". Among the negative aspects of the course, many responses included the appeal for a longer duration of the course (12.6%), suggesting at least one additional week. The remote offering of the last two editions (IX and X), due to the COVID-19 pandemic, was mentioned as a negative point by the participants (5.6%) because of the desire that the activities could have taken place in person. Even so, one of the respondents commented: "the online format had a positive side because I was able to participate even though I lived far away, but the negative side was the difficulty in concentrating and the tiredness I felt from sitting all day in front of the computer". 
Permanence in the academic environment and impacts on research Among students, 91.1% stated that CVZoo influenced their permanence or progress in the academic environment (n = 191), and 73.7% confirmed that the knowledge acquired during the two weeks of the course was applied in some way in research projects developed later (n = 133). Among the acquired knowledge most cited by respondents are procedures and techniques (e.g., statistical analysis, electron microscopy, ecological niche models) and theoretical content such as that related to molecular biology, geometric morphometrics, taxonomy, systematics, and scientific writing. For one respondent, the course was important for "creating the habit of studying Philosophy and understanding my research in the Epistemological sense and developing Integrative Taxonomy". About half of the students (48.4%) highlighted that the project development experience helped in the elaboration of future projects, and 32.3% of the students applied the proposal, or part of it, later in CI activities or even in the selection for graduate studies (i.e., master's and doctorate). One student stated that during the project's elaboration he was introduced to a methodology that he did not yet know, scanning electron microscopy (SEM), and that he later used it in his own master's research project. About 52% of the students stated that they had not executed the project due to lack of opportunity, change of area, or because they had not taken this specific project forward. For 24 students, the positive results went beyond the practical application, with 17 highlighting the networking developed with members of the course committee and professors at USP and the possibility of getting to know the scientific routine more deeply. Two students mentioned that their advisors at CVZoo were part of their TCC (undergraduate thesis) evaluation panel, and two others mentioned that their advisors at CVZoo are currently helping with their research projects in graduate school. One of the respondents stated that the course directly influenced the choice of his master's degree and the continuity of his academic career. Nine students stated that the presentation to an evaluation panel, made up of CVZoo organizers, was an important preparatory experience for similar situations in the future. Terms such as "challenging", "dynamic", "instigating", "enriching", and "profitable" were used to describe the project development experience, demonstrating the good reception of the activity by the participants. Only 5 students claimed that they had not developed a project or did not remember carrying it out. Seven negative responses were observed regarding the development of the research project during the course, and among these, two students claimed not to have had a specialist advisor in their animal group or research field. Among the contents offered in the form of workshops with the possibility of choice by the participants, those that stood out the most were: Systematics, Taxonomy, workshops on specific taxonomic groups, techniques (software, MicroCT, molecular biology), and scientific writing/methodology. The reasons given by the respondents were learning useful tools, up-to-date information on poorly studied taxonomic groups, discovering new topics of interest, and the teaching practices of the lecturers. 
Impacts on teaching practice and university extension Among teachers, 72.2% stated that they had incorporated the knowledge obtained in the course into their teaching practice (n = 12), and 41.7% had implemented the didactic sequences presented at the end of the course (n = 5), which are equivalent to the project developed by undergraduate students. Three teachers preferred to take advantage of the CVZoo opportunity to develop research projects instead of didactic sequences. Some teachers highlighted the importance of acquiring and updating knowledge in Zoology during the course to improve their classes. One teacher pointed out: "I already did practical classes using collected animals; seeing them with such diverse specimens inspired me to prepare the classes with greater care. The postgraduate course in management and conservation of wild fauna that I had taken the previous year gained even more meaning." One teacher also commented that she discussed aspects of the research routine, such as collection and animal preservation techniques, with her students. It should be noted that specimens preserved in alcohol were used during practical classes, allowing not only contact with different groups of animals but also a reflection on their use in school spaces. According to one of the respondents, classes on Biogeography and evolutionary processes were a watershed in her pedagogical practice, giving her greater confidence and autonomy to teach classes on these subjects. Another teacher highlighted that the way in which the contents were addressed in the course encouraged her to explore more teaching possibilities, such as working with drawings, collecting materials in the environment, visiting institutions, using and building objects (e.g., magnifying glasses, microscopes), and using media (e.g., podcasts) and games. One teacher commented that she passed on the knowledge acquired during the course to colleagues in the Science area who did not participate in CVZoo, thus expanding the scope of the course and of the knowledge addressed in it. Regarding engagement in extension actions, 77.6% (n = 149) of respondents stated that CVZoo had motivated their participation in other courses and subsequent extension activities. The awakening to university extension can be exemplified by the phrase of one of the respondents about the main motivation for continuing to carry out extension actions: "to perhaps generate the same impact that the course had on me". Among those who answered 'no' to the question (23.4%, n = 44), one commented that he already participated in extension activities before the course. Another respondent commented that "if there are more extension activities that show the population, especially young people, the importance of different types of knowledge, from there it is possible to create a new culture, in which the community supports and benefits from the work carried out in the universities". This, in fact, prevents teaching and research from becoming alienating practices, removed from society or exempt from reflection on the knowledge produced within academic walls, knowledge that must be transmitted to and discussed with communities (Santos et al. 2016). 
The extension activities most cited by the respondents as those of interest and/or already carried out by them were: environmental education actions such as building vegetable gardens, leading trails and exchanging knowledge with traditional communities and in schools (23.7%); scientific dissemination by research groups, conservation projects and science museums (20%); and the organization and monitoring of events (17%). In addition to CVZoo, respondents reported having participated in other university extension courses (20%) and also mentioned workshops and isolated lectures at their universities or nearby institutions (11.1%). One of the respondents reported that after participating in CVZoo, he began "looking for more extension courses from universities around the world, almost as if he had discovered a new way of interacting with people from other areas". CVZoo and the Brazilian scenario Extension practices are strategic spaces for the implementation of interdisciplinary activities that promote greater contact between the subjects involved, with knowledge of reality being fundamental for the application of efficient methods that allow social transformation. Among the existing actions for the popularization and development of Zoology in Brazil, we present here the experience of the Summer Course in Zoology at USP, which over the course of 10 years has contributed to the training of students and teachers from different regions of the country. The high proportion of course participants from the Southeast and Northeast regions observed here is expected if we consider that these regions concentrate the largest portion of the population (42.1% and 27.8%, respectively) (Artes & Unbehaum 2021) and that the course has been held in the state of São Paulo. Almost half of the concluding participants have continued their studies at the master's or doctorate level, a high proportion when compared to the national scenario, which can be explained by the bias of the selection process of course participants, which prioritizes candidates with greater interest in an academic career. The great interest in Systematics and Taxonomy demonstrated by the participants is quite positive given the urgent need to awaken and train new professionals engaged in the description of biodiversity, including that of lesser-known groups (Marques & Lamas 2006). In addition, due to CVZoo's national coverage, we have increased the incentive for students from all regions to enter this subarea of Zoology. Thinking of USP as a national reference in both research and teaching (EGIDA 2022), we feel that it is our responsibility to offer, in an extension format, knowledge of techniques and tools that can be applied by young researchers from other universities spread around the country. 
More than a third of Brazilian systematists are dedicated to the taxonomy of fish, mollusks, crustaceans and insects (Diptera, Hymenoptera and Coleoptera). Despite such groups being quite numerous in terms of species, other extremely rich invertebrate taxa within Arthropoda, or even beyond it, such as Nematoda, lack specialists who can dedicate themselves to making their diversity known (Marques & Lamas 2006). CVZoo has actively participated in the popularization of zoological groups that are not numerically diverse (e.g., lophophorates and interstitial pseudocoelomates), and in encouraging research into these relatively understudied groups, by including in its program workshops aimed at presenting the diversity and evolution of groups that are worked on by committee members (e.g., workshops about mammals, flatworms and annelids). In this way, we draw attention to these groups and indirectly fill possible gaps in the academic training of participants from universities without specialists in certain groups. Even groups that are not directly worked on by committee members are often addressed in classes on broad topics (e.g., Metazoa). The University of São Paulo has a privileged didactic collection of zoological material, including specimens of rare groups of non-panarthropod invertebrates that would hardly be seen in another university environment, which is why the promotion of activities involving these animals broadens the course participants' notion of biological diversity. University extension as a path for social transformation More than half of CVZoo concluding participants declared themselves white, which does not reflect the existing racial scenario in the country, in which self-declared brown and black people make up 56.2% of the Brazilian population (IBGE 2019). Considering only the Southeast and Northeast regions, the most represented among the students enrolled in the course, brown and black people constitute 48.9% of the population in the Southeast and 74.4% in the Northeast (IBGE 2019). However, when we look at the national scenario of higher education, we see proportions corresponding to those obtained here, including promising estimates of a decrease in the difference between white and black students over the years. In 1993, black people constituted only 18.2% of the student body, while in 2011 they already represented 37.2% of the total number of students (Picanço 2016). Such an increase can be understood as a reflection of the enactment of the Law of Quotas for Higher Education n° 12.711/2012, under which several universities began to adopt racial quotas and quotas for public school students, thus expanding access for brown, black, and low-income people. However, even though inequality is gradually being reduced, the disadvantages of black and brown people persist in terms of the educational opportunities they experience, a scenario that begins in high school and continues into higher education (Barreto 2015). This highlights the need for affirmative policies that make the access of different ethnic groups to education more equal, including outreach activities. 
Women represent the majority of enrollments in higher education (57%), both nationally and in all regions of the country (Barreto 2014). This is a recent situation that began to emerge in the 2000s, but it still cannot be understood as representing equal opportunities for men and women in professional placement (Barreto 2014). If different graduate courses are analyzed, women make up the majority in those with "lesser prestige" and related to "caring" functions, such as education and health, while men constitute the majority in exact sciences and technology courses (Artes & Unbehaum 2021), which is exemplified by the higher proportion of female teachers enrolled in CVZoo. The low representation of transgender participants reflects the national scenario of invisibility and exclusion of transgender people from citizenship, with only 0.02% of the trans population reaching higher education in Brazil, as pointed out by Benevides & Nogueira (2019). Unfortunately, it was not possible to make any comparison with the national scenario with regard to the sexual orientation of people living in Brazil, since data are scarce and were not included in the latest IBGE censuses. In the interactions promoted between students from different backgrounds, and between them and the graduate programs, the dialogical interaction between subjects and content was guided by the inseparability of teaching, research and extension. As for the motivation to continue an academic career and enter a graduate program, CVZoo allows interdisciplinarity between different areas of knowledge, considering the diversity and heterogeneity of existing undergraduate courses in the country and culminating in the expectation of impact on student training, with a consequent impact on social transformation (FORPROEX 2012). Offering the course in a remote format made it possible not only for students and teachers from different Brazilian regions to access it, but also for people whose financial condition would not allow for face-to-face participation. Therefore, in order to expand the scope of the course, it is essential to rethink its format in future editions, considering the possibility of carrying out face-to-face and remote activities together, since simply paying for accommodation and food for participants is not enough to ensure access for everyone. The data presented here denote the scope of the extension carried out, characterized as an intervention in social reality through the complementation of the academic training of teachers, which is sometimes relegated to secondary importance (Assis & Bonifácio 2011). As discussed by Alarcão (2011), reflection on teaching practice allows students and teachers to exercise their creativity and not merely act by reproducing ideas and practices in the same way they were presented. The use of varied didactic strategies by CVZoo lecturers has contributed to reaching individuals with different teaching-learning characteristics, in addition to providing for the construction of knowledge and production of meanings by its participants. 
Final considerations and future perspectives for CVZoo Since its inception, the course has prioritized a transdisciplinary approach to zoological groups, the participation of diverse people and the use of varied teaching approaches; based on these precepts, at each edition the course recognizes the opportunity to adapt to the academic scenario and to the social context of the target audience. Thus, affirmative actions, such as the implementation of a quota system for socially vulnerable people (ethnic groups and people with disabilities), are already being implemented in the process of selecting candidates for the eleventh edition. The idea is that the course becomes an increasingly tangible opportunity for people from socially marginalized groups, who are constantly denied access to academic spaces, thus contributing to the reduction of the disparity observed in the representativeness of certain groups in the scenario of Brazilian Zoology. Another important aspect resides in the need for constant updating of the forms of interaction and dissemination of the course to and with the general public, since social networks are always in motion. The effects of the COVID-19 pandemic are known to have reduced the academic productivity of scientists, but the impacts on the training and profile of undergraduate students who changed their routine to distance learning are still unknown. The diversity of workshops offered throughout the course, addressing taxonomic groups or macroecological and macroevolutionary aspects, still needs to be expanded, and the interdisciplinary nature of CVZoo activities should be considered a goal, but without losing focus on Zoology and the protagonism of animals. In this way, we aim to make the course grow and renew itself, becoming an integral activity of the academic culture of the student body of USP's graduate programs and continuing to contribute to the training of human resources in Zoology in the generations to come. Figure 1. Origin of participants of the 10 editions of the Summer Course in Zoology during the undergraduate period (a), master's (b), doctorate (c), and the current address (d). INT, International; MD, Midwest; N, North; NE, Northeast; SE, Southeast; S, South. Figure 2. List of focal phyla at different levels of training of course graduates. Figure 3. Classes of graduate programs in which CVZoo graduates enrolled. "Biodiv & Conserv" includes programs focused on Biodiversity and Conservation; "STAB" corresponds to the graduate program in Systematics, Animal Taxonomy and Biodiversity, at the Zoology Museum of USP. Figure 4. Profile of students (left panel) and teachers (right panel) enrolled over the 10 editions of the Summer Course in Zoology, regarding race (a, b), gender identity (c, d) and sexual orientation (e, f). Table 1. Number of enrollments and participants selected by course edition.
v3-fos-license
2016-05-04T20:20:58.661Z
2015-03-19T00:00:00.000
17495584
{ "extfieldsofstudy": [ "Materials Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/srep09277.pdf", "pdf_hash": "b57e63121030737ca6da4ef356029a52625cc6ab", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:68", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "b57e63121030737ca6da4ef356029a52625cc6ab", "year": 2015 }
pes2o/s2orc
Facile synthesis of three-dimensional structured carbon fiber-NiCo2O4-Ni(OH)2 high-performance electrode for pseudocapacitors Two-dimensional textured carbon fiber is an excellent electrode material and/or supporting substrate for active materials in fuel cells, batteries, and pseudocapacitors owing to its large surface area, high porosity, ultra-lightness, good electric conductivity, and excellent chemical stability in various liquid electrolytes. Nickel hydroxide is one of the most promising active materials that have been studied in practical pseudocapacitor applications. Here we report a high-capacitance, flexible and ultra-light composite electrode that combines the advantages of these two materials for pseudocapacitor applications. Electrochemical measurements demonstrate that the 3D hybrid nanostructured carbon fiber–NiCo2O4–Ni(OH)2 composite electrode shows high capacitance and excellent rate capability. To the best of our knowledge, the electrode developed in this work possesses the highest areal capacitance, 6.04 F cm−2 at a current density of 5 mA cm−2, among those employing carbon fiber as the conductor, and it still retains 64.23% of this value at 40 mA cm−2. As for the cycling stability, the initial specific capacitance decreases only from 4.56 F cm−2 to 3.35 F cm−2 after 1000 cycles under a current density of 30 mA cm−2. The ever-increasing demand for portable electronic equipment and electric or hybrid electric vehicles, as well as concerns about environmental pollution, have stimulated the rapid development of electrochemical energy storage and conversion devices (EESCs). These devices typically include lithium-ion batteries and pseudocapacitors [1][2][3][4]. In particular, pseudocapacitors, as a typical representative of EESCs, have been widely studied due to their numerous merits, such as fast charge and discharge properties, long service life, high power density, wide working-temperature range, high safety and environmental friendliness [5,6]. In the development of pseudocapacitors, electrode materials are well known as one of the key factors affecting their electrochemical performance [7,8]. So far, most investigations have focused on utilizing metals (e.g., Ni foil, Ni foam, Ti foil, Cu foil, etc.) as the current collector and transition metal oxides or hydroxides (e.g., NiO, Ni(OH)2, CoO, Co3O4, NiCo2O4, MnO2) as active materials to fabricate high-performance pseudocapacitor electrodes [9][10][11][12][13][14][15]. These electrode materials were found to exhibit impressive pseudocapacitive performance. Among all those materials, nickel hydroxide is a promising electrode material for pseudocapacitor applications due to its high theoretical capacitance (2082 F g−1 for Ni(OH)2 within 0.5 V), good electrochemical reversibility in alkaline electrolyte, low cost, abundant natural resources, and excellent environmental compatibility [16][17][18][19]. However, how to obtain Ni(OH)2 active material with excellent electrochemical performance and how to fabricate a high-energy-density and high-power-density electrode containing Ni(OH)2 remain big challenges [20]. In earlier work combining nanorod and ultrathin-nanosheet electrode materials based on carbon nanofibers, the resulting electrode showed excellent electrochemical performance due to its fine 3D hybrid nanostructure [23]. Recently, nickel cobalt sulfide directly deposited on carbon fiber has also shown a high specific capacitance of 1418 F g−1 at 5 A g−1 with excellent rate capability [24]. 
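As a side note, the theoretical value of 2082 F g−1 quoted above for Ni(OH)2 within a 0.5 V window is consistent with the standard one-electron faradaic estimate C = nF/(M ΔV); the short check below uses that textbook relation, which is an assumption about how the figure was obtained rather than a statement from the original work.

```python
# Theoretical specific capacitance of Ni(OH)2 for a one-electron redox process over 0.5 V.
F = 96485.0   # Faraday constant, C/mol
M = 92.71     # molar mass of Ni(OH)2, g/mol (58.69 + 2 * 17.01)
n = 1         # electrons transferred per formula unit
dV = 0.5      # potential window, V

C_theory = n * F / (M * dV)     # specific capacitance in F/g
print(f"{C_theory:.0f} F/g")    # ~2081 F/g, i.e. the ~2082 F/g figure quoted in the text
```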
Although these studies have achieved excellent results, it remains necessary and of great significance to continually improve the electrode performance of pseudocapacitors to meet ever-increasing societal needs. In this work, we designed and fabricated a unique 3D hybrid nanostructured electrode using commercial two-dimensional (2D) textured carbon fiber (CF) as the conductive substrate due to its good conductivity, ultra-lightness, excellent chemical stability and outstanding flexibility [25][26][27]. Then, needle-like NiCo2O4 was assembled uniformly and vertically on the surface of the carbon fibers to form the 3D hybrid structured CF-NiCo2O4 material, which has better electrical conductivity than binary metal oxides such as NiO and Co3O4, as reported in previous studies [9,21,28]. Within the composite, the 3D hybrid structured CF-NiCo2O4 serves as a supporting material, while polycrystalline Ni(OH)2 was deposited on the surface of the NiCo2O4 needles by using an electrochemical deposition technique. In this way, a new ultra-light, flexible and porous 3D hybrid structured CF-NiCo2O4-Ni(OH)2 electrode was synthesized. The preparation process is shown in Fig. 1. Electrochemical measurements demonstrate that the 3D hybrid nanostructured CF-NiCo2O4-Ni(OH)2 electrode shows a high capacitance with excellent rate capability and good cycling stability. Results The hydrothermally synthesized supporting material was first characterized by X-ray diffraction (XRD) (see Fig. 2a). All the diffraction peaks in the XRD pattern can be readily indexed as spinel NiCo2O4, according to the standard card (JCPDS Card No. 20-0781). The morphology of the as-prepared product was studied by scanning electron microscopy (SEM). The typical low-magnification SEM micrograph (Fig. 2b) clearly illustrates that the 3D hybrid nanostructured supporting material is composed of numerous needle-like NiCo2O4 structures with sharp tips uniformly arranged on the CF surface. The high-magnification SEM micrograph (Fig. 2c) reveals that the length of a NiCo2O4 needle is approximately 4 μm, and that the diameter of the nano-needle ranges from several nanometers at the tip to around 100 nm near its base. Bright-field transmission electron microscopy (TEM) images show that the samples are polycrystalline (Fig. 2d). High-resolution TEM micrographs (Fig. 2f) show lattice fringes with an interplanar spacing of 0.234 nm, corresponding to the (222) plane of the spinel phase NiCo2O4, in agreement with our XRD results. Then, selected-area electron diffraction (SAED) measurements were carried out, and the results show that the NiCo2O4 needles have a polycrystalline structure, which is consistent with the observation in Fig. 2e. Therefore, we have illustrated that the well-defined 3D hybrid nanostructured CF-NiCo2O4 supporting material can be obtained by using a facile, low-cost and effective solution route.
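The (222) assignment of the HRTEM lattice fringes above can be cross-checked with the cubic relation d = a/sqrt(h² + k² + l²); the lattice constant used below (≈ 8.11 Å for spinel NiCo2O4, the value associated with JCPDS card 20-0781) is an assumed input, not a measurement from this work.

```python
import math

a = 8.11            # assumed cubic lattice constant of spinel NiCo2O4 in angstrom (JCPDS 20-0781)
h, k, l = 2, 2, 2   # Miller indices of the plane assigned to the HRTEM fringes

d = a / math.sqrt(h**2 + k**2 + l**2)          # interplanar spacing in angstrom
print(f"d(222) = {d:.3f} A = {d / 10:.3f} nm") # ~2.34 A, i.e. ~0.234 nm
```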
The polycrystalline Ni(OH)2 wrinkled nanosheets were subsequently deposited on the surface of the NiCo2O4 needles using a simple electrochemical deposition method. The total mass of the CF-NiCo2O4-Ni(OH)2 electrode is 17.29 mg, and the loading of the Ni(OH)2 active material is 2.44 mg cm−2. The CF-NiCo2O4-Ni(OH)2 electrode was then analyzed using Raman spectroscopy. As seen in Fig. 3a, the Raman peaks at 185, 467, 508, and 651 cm−1 correspond to the F2g, Eg, F2g, and A1g modes of the NiCo2O4 needles, respectively [21,29]. Two new peaks are observed at 459 cm−1 and 534 cm−1 after electrodeposition, corresponding to the symmetric Ni-OH stretching mode and to the Ni-O stretching/vibrational mode, respectively [30][31][32]. As shown in Fig. 3b and c, Ni(OH)2 was deposited uniformly on the surface and in the interspaces of the 3D nanostructured supporting material. TEM micrographs show that the synthesized Ni(OH)2 has a 3D wrinkled nanostructure consisting of densely wrinkled nanosheets (Fig. 3d, e). The entire surface of the NiCo2O4 needles has been completely coated, indicating the formation of a highly porous 3D hybrid nanostructure. The corresponding SAED results (inset of Fig. 3d) exhibit a diffraction ring pattern, indicating that the 3D wrinkled Ni(OH)2 has a polycrystalline structure. The different diffraction rings can be readily indexed to the different crystal planes of the Ni(OH)2 phase. Fig. 3e illustrates that Ni(OH)2 is firmly connected to the surface of the NiCo2O4 needles, eventually forming a 3D wrinkled and porous nanostructure, consistent with the aforementioned SEM image analysis. This unique 3D hybrid porous nanostructure implies good electrochemical capacitive performance of the sample. Then, the unique 3D hybrid porous structured CF-NiCo2O4-Ni(OH)2 product was used as the electrode material in a three-electrode cell. Fig. 4a shows typical cyclic voltammetry (CV) curves of the pseudocapacitor with the CF-NiCo2O4-Ni(OH)2 electrode at different scanning rates in the potential range from −0.2 V to 0.55 V. The CV curves clearly demonstrate that the CF-NiCo2O4-Ni(OH)2 electrode shows typical pseudocapacitive characteristics and excellent reversibility. Specifically, a pair of representative redox peaks is clearly visible in each voltammogram at scanning rates of 2, 4, 6, 8, and 10 mV s−1, which is clearly distinct from the electric double-layer capacitance characterized by nearly rectangular CV curves. By comparing the CV curves with those of the CF-NiCo2O4 supporting material in Fig. 4b, and by comparing the discharge curves of the CF-NiCo2O4-Ni(OH)2 electrode with those of the CF-NiCo2O4 supporting material at a current density of 5 mA cm−2 (Fig. S1), we can see that the CF-NiCo2O4-Ni(OH)2 electrode demonstrates better reversibility and charging/discharging properties. The capacitive properties of the unique 3D hybrid porous structured CF-NiCo2O4-Ni(OH)2 electrode mainly originate from the strong faradaic redox reaction of the polycrystalline Ni(OH)2 nanosheets. The capacitance of the electrode was measured and can be correlated with the reversible Ni(OH)2/NiOOH redox couple through the following reaction [33]: Ni(OH)2 + OH− ⇌ NiOOH + H2O + e− (1). In Fig. 4a, the peak current increases significantly as the scanning rate increases from 2 to 10 mV s−1, which indicates that the kinetics of the interfacial faradaic redox reactions are rapid enough to facilitate the transport of electronic and ionic species. With increasing scan rate, the potential of the oxidation peak shifts in the positive direction and that of the reduction peak shifts in the negative direction, which is mainly related to the internal resistance of the electrode. 
To further evaluate the application potential of the 3D hybrid porous structured CF-NiCo2O4-Ni(OH)2 electrode, galvanostatic charge-discharge measurements were carried out between −0.1 and 0.45 V (vs. SCE) at various current densities, and the corresponding discharge curves of the electrode are shown in Fig. 4c. The shapes of the five curves are very similar and show ideal pseudocapacitive behaviour, with sharp responses and a small internal resistance drop. Moreover, there is a potential plateau in every discharge curve, which is the typical pseudocapacitive signature of transition-metal compounds and is in agreement with the result obtained from the CV curves in Fig. 4a. This phenomenon is caused by a charge-transfer reaction or an electrochemical adsorption-desorption process at the electrode-electrolyte interface. The areal capacitance calculated according to formula (2) is shown in Fig. 4d. Encouragingly, the capacitances are 6.04, 5.72, 5.24, 4.56 and 3.88 F cm−2 (corresponding to 2475, 2344, 2147, 1869 and 1590 F g−1, respectively) when the discharge current densities are 5, 10, 20, 30 and 40 mA cm−2, respectively. To the best of our knowledge, the areal capacitances obtained from the CF-NiCo2O4-Ni(OH)2 electrode in this work are the highest values among those employing 2D textured carbon fiber as the conductor. The capacitance gradually decreases as the current density increases, because at a higher current density the voltage drop increases and the active material involved in the redox reaction is insufficiently utilized. In comparison with the specific capacitance of 6.04 F cm−2 measured at a current density of 5 mA cm−2, the specific capacitance decreases to 3.88 F cm−2 when the current density increases to 40 mA cm−2, but it still retains 64.2% of the original value. These results illustrate that the CF-NiCo2O4-Ni(OH)2 electrode possesses excellent rate capability. In addition, to investigate the stability of the CF-NiCo2O4-Ni(OH)2 electrode, its cycling endurance was evaluated by repeated charge-discharge measurements at a constant current density of 30 mA cm−2, and the results are shown in Fig. 4e. The areal capacitance is around 4.56 F cm−2 in the first cycle and gradually decreases to 3.35 F cm−2 after 1000 cycles, corresponding to a 26.6% decrease from the initial specific capacitance. Although the cycling stability is not yet ideal, the capacitance of the CF-NiCo2O4-Ni(OH)2 electrode after 1000 cycles is still higher than the areal capacitance of the initial charge/discharge cycle reported in some recent studies, even at a heavy current density of up to 30 mA cm−2 [21,22,24].
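The gravimetric values quoted in parentheses above follow directly from dividing each areal capacitance by the Ni(OH)2 mass loading of 2.44 mg cm−2 reported earlier; the short check below reproduces them, to within rounding, together with the quoted rate retention.

```python
# Convert the reported areal capacitances (F/cm^2) into gravimetric values (F/g)
# using the Ni(OH)2 loading of 2.44 mg/cm^2.
loading_g_per_cm2 = 2.44e-3
areal = {5: 6.04, 10: 5.72, 20: 5.24, 30: 4.56, 40: 3.88}   # current density (mA/cm^2): C (F/cm^2)

for j, c in areal.items():
    print(f"{j:>2} mA/cm^2: {c:.2f} F/cm^2  ->  {c / loading_g_per_cm2:.0f} F/g")

print(f"rate retention at 40 mA/cm^2: {areal[40] / areal[5]:.1%}")   # ~64.2%
```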
Discussion The aforementioned electrochemical measurements demonstrate that the 3D hybrid nanostructured CF-NiCo2O4-Ni(OH)2 electrode material shows excellent pseudocapacitive behavior, namely high capacitance, good rate capability and cycling stability. Note that the areal capacitance of this hybrid electrode is as high as 6.04 F cm−2 at a current density of 5 mA cm−2. The excellent electrochemical performance of the CF-NiCo2O4-Ni(OH)2 electrode material may be attributed to its multiscale 3D hybrid structure, ranging from the nanoscale to the microscale. First, the 3D hybrid structure of the well-defined CF-NiCo2O4 supporting material provides an efficient 3D electrically conductive network, which facilitates high-speed electron transport and serves as the foundation for the electrodeposited Ni(OH)2. Second, the 3D CF-NiCo2O4 supporting material also has a larger active surface area for accommodating more of the active material Ni(OH)2. Third, the 3D wrinkled framework of Ni(OH)2 nanosheets allows the facile penetration of electrolyte into the electrode and promotes the surface redox reactions, which offers a relatively high electroactive surface area, as described in the schematic diagram in Fig. 1. Energy density and power density are two key factors for evaluating the power applications of pseudocapacitors; a good pseudocapacitor should provide both high energy density and high capacitance. The power density and energy density were estimated from the discharge curves by formulas (3) and (4). As shown in Fig. 4f, the energy density of the CF-NiCo2O4-Ni(OH)2 electrode was able to reach 9.14 kWh m−2 at a power density of 14.83 W m−2, and still remained at 5.86 kWh m−2 at a power density of 109.9 W m−2. Compared with other 3D hybrid porous structures that use 2D textured carbon fiber as the current collector (e.g., nickel cobalt sulfide, nickel-cobalt hydroxide nanosheets, NiCo2S4 nanotubes) and are applied as pseudocapacitor electrodes, the CF-NiCo2O4-Ni(OH)2 electrode exhibited higher areal capacitance and energy density (see Table 1). Conclusions To sum up, the 3D hybrid nanostructured CF-NiCo2O4 supporting material was successfully synthesized by using a facile solution method and a simple annealing treatment. By using the 3D hybrid nanostructured CF-NiCo2O4 material as the support, we have fabricated a new electrode material for pseudocapacitors via electrochemical deposition - the 3D hybrid nanostructured CF-NiCo2O4-Ni(OH)2. The total mass of the composite electrode was found to be only 17.29 mg. Electrochemical measurements demonstrate that the pseudocapacitor with the synthesized CF-NiCo2O4-Ni(OH)2 electrode exhibits high specific capacitance and excellent cycling stability. To the best of our knowledge, the CF-NiCo2O4-Ni(OH)2 electrode in this work shows the highest areal capacitance, up to 6.04 F cm−2 at a current density of 5 mA cm−2, among those employing carbon fiber as the supporting substrate. The capacitance of the electrode still remains at 3.35 F cm−2 at a current density of 30 mA cm−2 after 1000 cycles. The excellent performance of the electrode material is attributed to the unique 3D hybrid micro/nanostructure of the CF-NiCo2O4-Ni(OH)2 electrode. The performance we achieved suggests that the 3D hybrid nanostructured electrode prepared using the simple synthesis process developed in this work has great potential in various energy storage technologies. Therefore, we expect that pseudocapacitors with better electrochemical performance could be fabricated by using the 3D hybrid nanostructured electrode. Methods All the reagents used in the experiments were of analytical grade and were used without further purification. The synthesis of the CF-NiCo2O4 supporting materials. In a typical procedure, 1 mmol Ni(NO3)2·6H2O, 2 mmol Co(NO3)2·6H2O, and 5 mmol urea were dissolved in 35 mL of deionized (DI) water. After stirring for about 10 min, a transparent pink-colored solution was obtained. The solution was then sealed in a Teflon-lined stainless-steel autoclave. A piece of clean carbon cloth substrate, 3 × 7 cm² in size, was immersed in the reaction solution. Afterwards, the autoclave was placed in an oven, heated to 120 °C and kept at that temperature for 5 h. 
After cooling down naturally to room temperature, the sample was collected and then washed with DI water and absolute ethyl alcohol several times before being dried in a vacuum oven at 80 °C for 2 h. Finally, the dried sample was calcined at 350 °C in air for 2 h at a heating rate of 4 °C min−1. The fabrication of the 3D hybrid nanostructured CF-NiCo2O4-Ni(OH)2. The CF-NiCo2O4 supporting materials were accurately weighed and cut into 1 × 1 cm² pieces before being employed as the working electrode in 100 mL of a clear 0.1 M Ni(NO3)2 solution in a standard three-electrode cell at room temperature (about 25 °C), where Pt foil served as the counter electrode and a saturated calomel electrode (SCE) as the reference electrode. Electrodeposition was performed at a potential of −0.9 V for 400 s. Then, the obtained sample was washed with DI water and ethanol several times. The product was then dried at 100 °C for 10 h in a drying oven. The mass of the CF-NiCo2O4-Ni(OH)2 and of the pure Ni(OH)2 was 17.29 mg cm−2 and 2.44 mg cm−2, respectively, as weighed with a high-precision Sartorius BT25S analytical balance. Electrode materials characterizations and electrochemical measurements. X-ray diffraction (XRD) patterns were collected with a SHIMADZU XRD-7000 X-ray diffractometer using intense Cu Kα irradiation (voltage: 40.0 kV, current: 30.0 mA, λ: 1.5406 Å). The morphology of the products was examined by field-emission scanning electron microscopy (Hitachi SU-8010). TEM and HRTEM images were recorded on a JEOL JEM-2100F microscope operated at 200 kV. The electrodeposited CF-NiCo2O4-Ni(OH)2 product was used as the working electrode, with 2.44 mg of Ni(OH)2 active material. The electrochemical tests were conducted on a CHI660D electrochemical workstation in an aqueous KOH electrolyte (2.0 M) with a three-electrode cell in which Pt foil served as the counter electrode and a saturated calomel electrode (SCE) as the reference electrode. The areal capacitance of the electrode materials can be calculated according to the following equation: C = IΔt/(AΔV) (2), in which I is the discharge current (A), Δt is the discharge time (s), ΔV is the voltage window (V) and A is the area of the electrode material (cm²). The energy density (E) and power density (P) are important parameters for pseudocapacitor devices. In this study, we also evaluated the energy density and power density of the CF-NiCo2O4-Ni(OH)2 electrode from the discharge cycles as follows: E = CΔV²/2 (3) and P = E/t (4), where C is the areal capacitance (F m−2), t is the discharge time (s), and ΔV is the voltage window (V).
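A compact numerical sketch of equations (2)–(4) above is given below; the discharge time and electrode area fed in are placeholders rather than measured data, and the energy expression follows the conventional E = CΔV²/2 form, which is an assumption about the exact formula used.

```python
def areal_capacitance(i_a, dt_s, area_cm2, dv_v):
    """Equation (2): C = I * dt / (A * dV), in F/cm^2."""
    return i_a * dt_s / (area_cm2 * dv_v)

def energy_density(c_f_cm2, dv_v):
    """Assumed form of equation (3): E = C * dV^2 / 2, in J/cm^2."""
    return 0.5 * c_f_cm2 * dv_v**2

def power_density(e_j_cm2, dt_s):
    """Equation (4): P = E / t, in W/cm^2."""
    return e_j_cm2 / dt_s

# Illustrative numbers only: 1 cm^2 electrode, 5 mA discharge, hypothetical 660 s discharge time.
C = areal_capacitance(i_a=5e-3, dt_s=660.0, area_cm2=1.0, dv_v=0.55)
E = energy_density(C, dv_v=0.55)
P = power_density(E, dt_s=660.0)
print(f"C = {C:.2f} F/cm^2, E = {E:.2f} J/cm^2, P = {P * 1e3:.2f} mW/cm^2")
```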
Racial and Ethnic Differences in Health Care Experiences for Veterans Receiving VA Community Care from 2016 to 2021

Background: Prior research documented racial and ethnic disparities in health care experiences within the Veterans Health Administration (VA). Little is known about such differences in VA-funded community care programs, through which a growing number of Veterans receive health care. Community care is available to Veterans when care is not available through the VA, nearby, or in a timely manner.

Objective: To examine differences in Veterans' experiences with VA-funded community care by race and ethnicity and assess changes in these experiences from 2016 to 2021.

Design: Observational analyses of Veterans' ratings of community care experiences by self-reported race and ethnicity. We used linear and logistic regressions to estimate racial and ethnic differences in community care experiences, sequentially adjusting for demographic, health, insurance, and socioeconomic factors.

Participants: Respondents to the 2016-2021 VA Survey of Healthcare Experiences of Patients-Community Care Survey.

Measures: Care ratings in nine domains.

Key Results: The sample of 231,869 respondents included 24,306 Black Veterans (mean [SD] age 56.5 [12.9] years, 77.5% male) and 16,490 Hispanic Veterans (mean [SD] age 54.6 [15.9] years, 85.3% male). In adjusted analyses pooled across study years, Black and Hispanic Veterans reported significantly lower ratings than their White and non-Hispanic counterparts in five of nine domains (overall rating of community providers, scheduling a recent appointment, provider communication, non-appointment access, and billing), with adjusted differences ranging from −0.04 to −0.13 standard deviations (SDs) of domain scores. Black and Hispanic Veterans reported higher ratings with eligibility determination and scheduling initial appointments than their White and non-Hispanic counterparts, and Black Veterans reported higher ratings of care coordination, with adjusted differences of 0.05 to 0.21 SDs. Care ratings improved from 2016 to 2021, but differences between racial and ethnic groups persisted.

Conclusions: This study identified small but persistent racial and ethnic differences in Veterans' experiences with VA-funded community care, with Black and Hispanic Veterans reporting lower ratings in five domains and, respectively, higher ratings in three and two domains. Interventions to improve Black and Hispanic Veterans' patient experience could advance equity in VA community care.

Supplementary Information: The online version contains supplementary material available at 10.1007/s11606-024-08818-3.

INTRODUCTION

Racial and ethnic disparities in health care are well documented.2-5 These disparities extend to patient-reported experiences with care, which capture patient-centered measures of health care quality.6 Among VA healthcare system enrollees, research found that Black Veterans reported poorer experiences with care, including communication with physicians, compared to non-Hispanic White Veterans.7 Additionally, research identified disparities in Veterans' experiences with care in VA facilities that disproportionately serve Veterans from underrepresented groups,8 with Veterans being more likely to report negative experiences with care at VA facilities serving higher proportions of Black and Hispanic Veterans.9
Although numerous studies have examined health care disparities within the VA, less is known about the experiences of Veterans receiving care outside of the VA healthcare system. A growing number of Veterans now receive VA-funded care from community providers (i.e., outside of the VA) because of two major policy reforms: the Veterans Access, Choice, and Accountability Act of 2014 (Choice Act) and the VA Maintaining Internal Systems and Strengthening Integrated Outside Networks Act of 2018 (MISSION Act). The Choice Act expanded Veterans' ability to receive VA-funded care from community providers if they could not obtain timely care within the VA healthcare system or lived far from or experienced hardship getting to a VA healthcare facility. The MISSION Act broadened eligibility for and made these community care programs permanent.10 Over 2.6 million Veterans, nearly one in three Veterans enrolled in the VA healthcare system, were referred to community providers in the 18 months after the passage of the MISSION Act.10,11 Veterans are eligible to use community care for a range of primary and specialty care services (e.g., physical therapy, orthopedic care, and ophthalmology), and often use VA-funded community care in addition to care within the VA healthcare system.11,12

Community care programs were intended to improve Veterans' access to timely and high-quality health care.13,14 However, concerns remain about whether Veterans from minoritized racial and ethnic groups have equitable experiences navigating community care and accessing high-quality community providers.15,16 These concerns are especially salient given the pervasive and entrenched factors that give rise to and perpetuate racial and ethnic disparities in health care access, quality, and outcomes nationally.3,17 Although a few studies have examined Veterans' experiences with community care,18,19 we are not aware of research that quantifies racial and ethnic differences in Veteran-reported experiences with VA community care. The objective of this study was to use data from the VA's Survey of Healthcare Experiences of Patients - Community Care Survey (SHEP-CCS) to examine Veterans' experiences with community care by race and ethnicity between 2016 and 2021.

Study Design and Data Sources

We conducted an observational analysis of Veterans' ratings of health care experiences in community care settings by race and ethnicity using the VA SHEP-CCS for the period 2016-2021.20 The SHEP-CCS is a mixed-mode (internet/mail) survey administered to Veterans who received VA-funded community care over the prior 3 months. The survey asks Veterans to rate their experiences across nine domains.19,21,22 Sampling for the SHEP-CCS is random within strata, which reflect the type of care received (e.g., primary care, psychiatric care, other subspecialty care). We linked respondent-level data from the SHEP-CCS to data from the Centers for Medicare and Medicaid Services (CMS) to identify Medicare and Medicaid enrollment, and to the VA Corporate Data Warehouse (CDW) to obtain information on demographics, VA priority group status, health conditions, geography, and the Veterans Integrated Services Network (VISN) where care was received. VISNs are regional divisions of the Veterans Health Administration that manage VA medical centers and other medical facilities (nationally, there are 18 VISNs).23 The VA Pittsburgh Healthcare System Institutional Review Board approved this study.
Study Sample

The SHEP-CCS had a response rate of 30.7% during the study period, in line with other surveys of patient-reported care experiences.24 A total of 233,634 respondents to the SHEP-CCS were identified from 2016 to 2021. We excluded 188 respondents who could not be linked to VA CDW data; 1093 respondents who resided outside of the 50 US states or Washington, D.C.; and 484 respondents without county-level geographic identifiers (needed to measure county-level covariates). From this sample, we analyzed differences in community care experiences based on ethnicity (n = 16,490 Hispanic and 200,725 non-Hispanic Veterans) and race (n = 24,306 Black/African American and 180,313 White Veterans).

Outcomes

We examined experiences in nine domains: overall satisfaction with community care, overall rating of the provider, eligibility determination for VA community care, first appointment access, scheduling a recent appointment, provider communication, care coordination, non-appointment access (e.g., after-hours access to providers, waiting time in the office), and billing.21 We followed domain-item groupings for the VA-SHEP survey to combine responses to individual survey items into domain scores (see Appendix for details). Respondent-level scores were constructed as equally weighted means of ratings of domain items. Items were linearly converted to 0-100-point scales before aggregation into scores. Higher scores represent greater satisfaction with care.

Independent Variables

We analyzed SHEP respondents according to their self-reported race and ethnicity.26,27 We compared community care experiences of Veterans by race (comparing those who identified as Black or African American vs. White, regardless of ethnicity) and ethnicity (comparing those who identified as Hispanic vs. non-Hispanic, regardless of their race). Veterans who identified as Black/African American in addition to other racial groups were analyzed as Black. We did not separately analyze care experiences among racial or ethnic groups with smaller representation in the SHEP, such as American Indian or Alaska Native and Asian Veterans, because smaller sample sizes limited our ability to make meaningful comparisons.

Covariates

We assessed age, gender, health, insurance and socioeconomic status, rural residence, county-level supply of health care providers, and type of community care received. To measure health status, we used the Elixhauser Comorbidity Index28 along with separate indicators for the presence of a serious mental illness (i.e., bipolar disorder, major depression, post-traumatic stress disorder, schizophrenia, or psychosis) and substance use disorder (i.e., substance use disorder related to drug or alcohol use). We identified these conditions using diagnoses on Veterans' health care records in the VA CDW (care provided within VA) and in VA Program Integrity Tool files (administrative claims for community care) in the 2 years preceding the SHEP-CCS survey. We measured socioeconomic status and insurance using VA priority group status (which reflects Veterans' income and service-connected disabilities29) and with indicators for enrollment in Medicaid, Medicare, Medicare Part D, and Medicare Advantage.30,31 Insurance is correlated with socioeconomic status and may impact access to care outside of the VA healthcare system. We included county-level measures of urban vs. rural residence (large metropolitan area, small metropolitan area, micropolitan area, and rural) and the supply of community physicians per 1000 county residents.32 Finally, we included indicators for the category of community care received: primary care, subspecialty care, surgical care, eye care, acupuncture, psychiatric care, and other care.
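To make the domain-score construction described above concrete, the sketch below rescales individual survey items to a 0-100 scale and averages them with equal weights within each domain. The item names, response ranges, and domain groupings are illustrative assumptions, not the actual SHEP-CCS item set (which is documented in the paper's Appendix).

```python
import pandas as pd

# Minimal sketch of the domain-score construction described above.
# Item names, response scales, and domain groupings are hypothetical.

ITEM_SCALES = {              # (min, max) of each raw item
    "comm_listen": (1, 4),
    "comm_explain": (1, 4),
    "comm_respect": (1, 4),
    "billing_clarity": (1, 4),
}
DOMAINS = {
    "provider_communication": ["comm_listen", "comm_explain", "comm_respect"],
    "billing": ["billing_clarity"],
}

def rescale_0_100(series, lo, hi):
    """Linearly convert a raw item to a 0-100 scale."""
    return (series - lo) / (hi - lo) * 100.0

def domain_scores(items: pd.DataFrame) -> pd.DataFrame:
    """Equally weighted mean of rescaled items within each domain."""
    rescaled = pd.DataFrame({
        name: rescale_0_100(items[name], *ITEM_SCALES[name]) for name in items
    })
    return pd.DataFrame({
        domain: rescaled[cols].mean(axis=1) for domain, cols in DOMAINS.items()
    })

# Example with two hypothetical respondents (higher score = more satisfied)
raw = pd.DataFrame({
    "comm_listen": [4, 2], "comm_explain": [3, 2],
    "comm_respect": [4, 1], "billing_clarity": [2, 4],
})
print(domain_scores(raw))
```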
Statistical Analyses

We plotted unadjusted ratings of community care experiences to examine trends and racial and ethnic differences in ratings over the study period. Next, for each domain score, we ran three sets of respondent-level linear regression models to estimate racial and ethnic differences in community care experiences. We constructed sequential models, guided by the National Academy of Medicine's framework for examining health care disparities. According to this framework, disparities represent racial or ethnic differences that are not explained by group differences in health status or care needs. The framework considers how geographic, socioeconomic, and insurance factors may mediate racial and ethnic disparities in care.33 Accordingly, Model 1 adjusted for demographic factors (age and sex), health status, indicators for the category of community care received, and year fixed effects. Model 2 adjusted for all variables in the first model as well as rurality and county-level supply of physicians. Model 3 further adjusted for education, along with socioeconomic and insurance factors (included together because many differences in insurance coverage are income-related). Sequential adjustment for covariates allowed us to quantify the extent to which differences in ratings persisted after adjustment for geography, socioeconomic status, and insurance. We also conducted a sensitivity analysis that controlled for individual Elixhauser comorbidities instead of a linear comorbidity index.

We conducted a secondary analysis to explore whether Veterans differed in their likelihood of reporting high vs. low ratings of care by race or ethnicity.9,34 For each domain score, we estimated logistic regression models to test for differences by race or ethnicity in the probability of rating care on that domain at or above the 90th percentile (high rating) or at or below the 10th percentile (low rating). Percentiles were constructed for each patient experience domain among all SHEP-CCS respondents. We estimated marginal differences in the probability that Black vs. White or Hispanic vs. non-Hispanic Veterans reported high vs. low ratings of care, adjusting for all covariates in Model 3. All estimates were weighted to account for survey sampling and non-response using Stata version 15. Statistical tests were conducted using a two-tailed 5% type I error rate.
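A simplified illustration of the sequential models described above is sketched below: a survey-weighted linear regression of a domain score on a race indicator with an expanding covariate set, plus a weighted logistic model for the high-rating outcome with an average marginal difference. Variable names, the weighting approach (WLS/GLM with frequency-style weights rather than a full survey-design adjustment), and the marginal-effect computation are simplifying assumptions; the paper's analyses were run in Stata with survey weights as described in the text.

```python
import numpy as np
import statsmodels.api as sm

# Simplified sketch of the sequential models (variable names are hypothetical).
MODEL_COVARS = {
    1: ["age", "female", "elixhauser", "care_type_psych", "year_2021"],
    2: ["age", "female", "elixhauser", "care_type_psych", "year_2021",
        "rural", "md_per_1000"],
    3: ["age", "female", "elixhauser", "care_type_psych", "year_2021",
        "rural", "md_per_1000", "education", "priority_group_low_income",
        "medicaid", "medicare"],
}

def adjusted_difference(df, score, group="black", model=3):
    """Survey-weighted linear regression; the coefficient on `group` is the
    adjusted Black-White difference on the 0-100 domain score."""
    X = sm.add_constant(df[[group] + MODEL_COVARS[model]])
    fit = sm.WLS(df[score], X, weights=df["svy_weight"]).fit(cov_type="HC1")
    return fit.params[group], fit.bse[group]

def marginal_high_rating_difference(df, score, group="black", model=3):
    """Weighted logistic model for rating >= 90th percentile; returns the
    average marginal difference in predicted probability for `group`."""
    high = (df[score] >= df[score].quantile(0.90)).astype(int)
    X = sm.add_constant(df[[group] + MODEL_COVARS[model]])
    fit = sm.GLM(high, X, family=sm.families.Binomial(),
                 freq_weights=df["svy_weight"]).fit()
    # Average marginal effect: difference in predicted probability with the
    # group indicator switched on vs. off for every respondent.
    x1, x0 = X.copy(), X.copy()
    x1[group], x0[group] = 1, 0
    return float(np.mean(fit.predict(x1) - fit.predict(x0)))
```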
Sample Characteristics

Our sample comprised 16,490 (7.6%) Hispanic and 200,725 (92.4%) non-Hispanic Veteran-year observations, which when weighted represented 1,419,630 and 13,851,118 Veteran-years in the population of community care users, and 24,306 (11.9%) Black/African American and 180,313 (88.1%) White Veteran-year observations, which when weighted represented 2,027,820 and 12,183,563 Veteran-years in the population.

In the survey-weighted sample, the mean (SD) age was 54.6 (15.9) years among Hispanic Veterans and 61.2 (15.1) years among non-Hispanic Veterans (Table 1). The mean (SD) age was 56.5 (12.9) years among Black Veterans and 62.0 (15.4) years among White Veterans. The distribution of Elixhauser comorbidities was similar across racial and ethnic groups, although higher proportions of Hispanic and Black Veterans were diagnosed with a serious mental illness (40.0% among Hispanic Veterans and 38.8% among Black Veterans vs. 30.1% among non-Hispanic and 29.0% among White Veterans). Higher proportions of Black and Hispanic Veterans were enrolled in Medicaid, and smaller proportions had Medicare. Black and Hispanic Veterans were slightly more likely to use community care for acupuncture and psychiatric care but less likely to use community care for eye care and other medical subspecialty care compared with White and non-Hispanic Veterans, respectively.

Unadjusted Analyses

In unadjusted analyses of community care ratings from 2016 to 2021, Hispanic Veterans had lower ratings of care than non-Hispanic Veterans in overall provider ratings, scheduling a recent appointment, provider communication, care coordination, non-appointment access (Fig. 1A), and billing. Ratings for eligibility determination, first appointment access, and billing were lower overall and did not differ significantly between Hispanic and non-Hispanic Veterans (indicated by overlapping confidence intervals in multiple years). Lower ratings were also observed for Black than White Veterans in overall provider ratings, provider communication, and non-appointment access (Fig. 1B). Black Veterans reported higher ratings navigating eligibility determinations than White Veterans, although ratings in this domain were low for both groups. Both Black and White Veterans reported relatively low and similar ratings for first appointment access and billing. Across domains, mean ratings improved by 2.4 to 6.4 points (on a 100-point scale) from 2016 to 2021. Racial and ethnic differences in ratings of care persisted over time between non-Hispanic and White Veterans vs. those from underrepresented ethnic and racial groups.

Adjusted Analyses

In Model 1, Hispanic Veterans had significantly lower ratings of their health care experiences in the following domains: overall rating of community care provider, scheduling a recent appointment, provider communication, care coordination, non-appointment access, and billing (Table 2). Black Veterans had significantly lower ratings of care than White Veterans in overall ratings of community care providers, scheduling a recent appointment, provider communication, non-appointment access, and billing (Table 3). For example, overall ratings of community care providers were lower among Hispanic Veterans than non-Hispanic Veterans (difference, −1.36 points; 95% CI −1.97 to −0.75) and among Black Veterans than White Veterans (difference, −1.71 points; 95% CI −2.16 to −1.25). Conversely, Hispanic and Black Veterans had higher ratings of their experiences with eligibility determination and scheduling an initial appointment than non-Hispanic and White Veterans.

For most domains of community care experiences, estimated differences in community care ratings by race and ethnicity did not differ appreciably between Models 1, 2, and 3. In Models 2 and 3, both Hispanic and Black Veterans had lower ratings of care than non-Hispanic and White Veterans, respectively, in overall rating of community providers, scheduling a recent appointment, provider communication, non-appointment access, and billing. For example, ratings of non-appointment access were lower among Black than White Veterans (difference in Model 3, −3.35 points; 95% CI −3.97 to −2.73) and among Hispanic than non-Hispanic Veterans (difference in Model 3, −2.37 points; 95% CI −3.17 to −1.58). Expressed relative to the SD of this domain score (26.22 points), these differences correspond to an effect size of −0.13 SDs among Black Veterans and −0.09 SDs among Hispanic Veterans.
Similar to Model 1, in Models 2 and 3, Hispanic and Black Veterans had higher ratings than non-Hispanic and White Veterans, respectively, of eligibility determination and scheduling a first appointment. However, in Models 2 and 3 (compared to Model 1), we no longer found a significant difference in care coordination ratings between Hispanic and non-Hispanic Veterans, whereas we found a slightly higher adjusted care coordination rating among Black Veterans than White Veterans (difference in Model 3, 1.4 points; 95% CI 0.75 to 2.06). From Model 1 to Model 3, differences in care coordination ratings between Hispanic vs. non-Hispanic Veterans narrowed from −1.25 to −0.14 points, and widened from 0.36 to 1.41 points between Black vs. White Veterans. Our results did not differ appreciably in a sensitivity analysis that controlled for individual Elixhauser comorbidities (Appendix Tables 6 and 7).

Adjusted Analyses of High vs. Low Ratings of Community Care Experiences

In logistic regression models examining the binary outcome of high vs. low ratings of community care, Hispanic Veterans were less likely than non-Hispanic Veterans to report high ratings of provider communication, non-appointment access, and billing (Fig. 2).
Similarly, Hispanic Veterans were more likely than non-Hispanic Veterans to report low ratings of their provider overall, scheduling a recent appointment, provider communication, non-appointment access, and billing. Conversely, Hispanic Veterans were more likely than non-Hispanic Veterans to report high ratings of navigating community care eligibility determination and scheduling initial appointments. Black Veterans were more likely than White Veterans to report low ratings with overall provider experience, scheduling a recent appointment, provider communication, non-appointment access, and billing, and were more likely to report high ratings for eligibility determination and scheduling an initial community care appointment.

DISCUSSION

This study examined Veterans' self-reported experiences with VA-funded community care from 2016 to 2021 using national data from respondents to the VA SHEP-CC survey. We had three principal findings. First, Black and Hispanic Veterans reported lower ratings of care in five domains compared to White and non-Hispanic Veterans. Specifically, Black and Hispanic Veterans reported lower ratings than their White and non-Hispanic Veteran counterparts in overall provider ratings, scheduling a recent appointment, provider communication, non-appointment access, and billing. These disparities were statistically significant, although they were quantitatively small (equivalent to −0.04 to −0.12 standard deviations of domain scores). Second, Black and Hispanic Veterans both reported better ratings than White and non-Hispanic Veterans in eligibility determination and first appointment access. However, Veterans from all racial and ethnic groups rated their care less favorably in these domains compared to other domains. In fully adjusted models, Black Veterans also reported better ratings of care coordination compared to White Veterans. Third, overall ratings of healthcare experiences improved over the study period for all domains. However, observed racial and ethnic disparities persisted over time.

These findings add to literature documenting racial and ethnic disparities in Veterans' health care experiences and illuminate the extent to which disparities arise in VA community care. One study found that rural-dwelling Veterans reported worse experiences with community care vs. with care at VA facilities.18 Another study using SHEP data found that Veterans reported better experiences in VA facilities than community settings in most domains, except for access to specialists.19 Our findings are consistent with research that identified disparities in care experiences within the VA healthcare system7 and among Veterans who used care outside of the VA through insurance programs such as Medicare.35 It is unclear whether the magnitudes of disparities within VA community care are larger or smaller than in the VA; this will be important for policymakers to monitor. Notably, Black and Hispanic Veterans consistently rated their care worse than White Veterans for both overall provider ratings and provider communication. Prior studies of the VA healthcare system and non-VA providers also found racial and ethnic disparities in patient-reported experiences with provider communication.7,36,37 In studies of healthcare interactions, implicit racial bias and negative stereotypes were found to be associated with poorer communication and ratings of care, particularly among Black patients.38 Although the importance of policies to address systemic bias in health care is not limited to VA community care, such policies warrant particular attention in VA community care because of the unique medical and social circumstances surrounding military service.2 Efforts to promote culturally competent care that improves Veterans' interactions with community-based providers could reduce disparities in these patient experience domains.

Our analyses also highlight opportunities for VA to improve certain aspects of community care eligibility and administration. To receive community care, Veterans must navigate complex program rules, requiring proof of eligibility based on various criteria such as travel distance. Although VA has established systems to manage authorizations, referrals, and billing, Veterans have reported that these processes can be difficult to navigate.39 While community care experiences improved over time, Veterans across racial and ethnic groups remained less satisfied with their experiences navigating community care eligibility, scheduling appointments, and billing, compared to their ratings in other domains. Furthermore, while disparities in Veterans' experiences with community care were often small, our analyses often revealed a consistent pattern of inequities. For example, Black and Hispanic Veterans reported lower mean ratings of providers and communication than White and non-Hispanic Veterans. Further, Black Veterans were less likely than White Veterans to report positive experiences and were more likely to report negative experiences in both of these domains. Reducing these disparities, in addition to improving overall levels of community care experience, could improve how VA community care programs serve Veterans.

Table 2. Adjusted differences in Veterans' experiences with VA community care between Hispanic and non-Hispanic Veterans, 2016-2021. Ethnicity was self-reported and analyses included all racial groups; domain means and standard deviations are survey-weighted for 2016-2021. Adjusted differences come from respondent-level linear regressions of each domain score on a Hispanic ethnicity indicator, adjusting for the covariates indicated in each model column and year fixed effects; differences are on a 100-point scale (dividing by the domain SD gives an effect size), with heteroskedasticity-robust 95% confidence intervals and p-values, weighted using SHEP survey weights.
Limitations

This study had several limitations. First, because the sample was limited to Veterans who used community care, we were unable to observe factors that could have contributed to racial or ethnic differences in referrals or access to community care. Second, although we studied Veterans from different racial and ethnic groups, we were unable to explore differences in experiences by type of community care received because of sample size limitations. Third, because of small sample sizes, we were unable to examine disparities jointly by race and ethnicity (e.g., among Veterans identifying as Black and Hispanic). Further, due to the smaller representation of Veterans of American Indian, Alaska Native, or Asian backgrounds, we were unable to analyze Veterans from these less common racial and ethnic groups or those with multiracial and multiethnic backgrounds. Fourth, we could not measure provider characteristics or control for provider group effects due to the lack of provider-level data in the SHEP-CCS. Further research is needed to examine provider-level factors, such as racial and ethnic concordance, that may mediate disparities in healthcare experiences.

CONCLUSION

Black and Hispanic Veterans reported less favorable experiences with VA-funded community care than White and non-Hispanic Veterans, respectively, in overall provider ratings, scheduling a recent appointment, provider communication, non-appointment access, and billing. Although quantitatively small, observed disparities persisted over time. Interventions to improve Black and Hispanic Veterans' healthcare experience, including in areas related to patient-provider communication, could help advance equity in VA community care and the overall care of the Veteran population.

Figure 1. Unadjusted annual ratings of Veterans' experiences with VA community care, stratified by ethnicity (panel A: Hispanic vs. non-Hispanic Veterans, all racial groups) and race (panel B: Black or African American vs. White Veterans, all ethnic groups), 2016-2021. Panels display annual unadjusted survey-weighted mean ratings by survey domain from the VA SHEP survey, with all scores linearly transformed onto a common 100-point scale; 95% confidence bars use heteroskedasticity-robust standard errors, and race and ethnicity were self-reported (see METHODS and Appendix for domain definitions and score calculation).

Figure 2. Forest plots of adjusted marginal differences in the probability of reporting high and low ratings of care between Hispanic vs. non-Hispanic Veterans and Black vs. White Veterans with VA community care, 2016-2021. Each panel shows the adjusted marginal difference (in percentage points) in the probability that Veterans who identify as Hispanic or Black report positive or negative community care experiences, relative to non-Hispanic or White Veterans, respectively, pooled across 2016-2021. Estimates come from respondent-level logistic regressions of each outcome on Hispanic ethnicity or Black race, adjusting for all covariates in Table 1 and year fixed effects. Positive experiences are ratings at or above the 90th percentile and negative experiences at or below the 10th percentile of each domain score among all SHEP respondents; because some score distributions are discrete, these categories may include more than 10% of Veterans.
Table 1. Characteristics of Veterans in the Survey of Healthcare Experience of Patients (SHEP) survey administered to community care recipients, 2016-2021. The table presents characteristics of Veterans who used VA community care in the prior 90 days, stratified by ethnicity and race (self-reported; ethnicity analyses include all racial groups and race analyses include all ethnic groups). Continuous variables are shown as survey-weighted means with standard deviations in brackets, and categorical variables as unweighted frequencies with survey-weighted proportions, pooled across 2016-2021. The table notes define the measures: the 0-30 Elixhauser comorbidity count, serious mental illness, and substance use disorder were assessed from diagnoses in the VA Corporate Data Warehouse and VA Program Integrity Tool files during the two federal fiscal years preceding the survey; VA priority groups 1-4 reflect the most significant levels of service-connected disability, group 5 low income without service-connected disability, group 6 care for radiation, toxic substance, or other environmental exposures, and groups 7-8 non-service-connected disabilities requiring copayments; Medicaid and Medicare enrollment come from enrollment files linked to VA administrative data for the two preceding federal fiscal years; urbanicity is based on Department of Agriculture Rural-Urban Commuting Area (RUCA) codes linked by residential ZIP code; and the type of community care received in the prior 90 days was grouped into seven categories.
Table 3. Adjusted differences in Veterans' experiences with VA community care between Black/African American and White Veterans, 2016-2021. Analyses of Veterans categorized as Black, African American, or White included all ethnic groups, with race self-reported by SHEP respondents; domain means and standard deviations are survey-weighted for 2016-2021. Adjusted differences come from respondent-level linear regressions of each domain score on a Black race indicator, adjusting for the covariates indicated in each model column and year fixed effects (see METHODS and Table 1); differences are on a 100-point scale, and dividing by the domain SD gives an effect size. Estimates use SHEP survey weights, with 95% confidence intervals and p-values based on heteroskedasticity-robust standard errors.
Study on Food Grain Safety and Countermeasures in Gansu Province

China is a developing country with a large population and relatively little arable land, and guaranteeing food security has always been a priority for government at all levels. Regional food security, as an important component of national food security, likewise cannot be ignored. In recent years, Gansu Province has basically achieved a tight balance between grain supply and demand. However, constrained by a harsh natural environment, resource shortages, a low level of economic development, and imperfect institutions, the food security situation in Gansu Province remains grim. This study approaches food security in Gansu Province as follows: starting from the concept of food security and three evaluation criteria (the efficiency of grain production, the level of grain self-sufficiency, and the level of grain reserves), it summarizes the current state of food security in Gansu and analyzes the challenges the province faces. On this basis, it puts forward a series of countermeasures for addressing Gansu's food security problems.

INTRODUCTION

In 1994, the director of the United States Worldwatch Institute, Lester Brown, published an article entitled "Who Will Feed China?" in World Watch magazine (Christiansen, 2009). China's grain security has since become a topic of interest to scholars at home and abroad; as total grain output rose gradually, the issue appeared to have been resolved (Du et al., 2011). Since 1999, however, owing to the reduction of arable land, declining enthusiasm among farmers for production, the impact of grain prices, and other factors, grain production in China showed a general downward trend, the grain security situation became very serious, and grain security once again became a focus of attention (Chen, 2006). Since 2003, grain prices have continued to rise in China's domestic market, and the prices of wheat, wheat flour, rice, feed, and other manufactured products have risen significantly in grain-producing regions; this phenomenon has prompted another round of discussion of food security among experts and scholars (Peilong and Juan, 2014).

While the majority of scholars argue that China's grain supply is currently secure, we must still recognize clearly that the factors constraining food security persist and that some deeper structural and institutional problems remain unresolved, so ensuring grain security remains a top priority for government at all levels. Since 2004, the "No. 1 Document" issued annually by the state has treated agricultural issues related to food security as an important task of the Party and the state. On the one hand, this reflects the attention given to agriculture and rural areas; on the other hand, it shows that for China, an agricultural country, grain security problems cannot be ignored.
Gansu Province is located in the inland northwest of China; its economy is based on agriculture and animal husbandry and is relatively underdeveloped. Socio-economic development in the province is comparatively backward, agriculture accounts for a large share of the regional economy, the natural environment is poor, the agricultural foundation is weak, and rural poverty is serious. After years of effort, agriculture in Gansu Province has nevertheless made rapid progress and achieved considerable results. Total grain output reached 5 million tons in 1974, 6 million tons in 1989, 7 million tons in 1993, 8 million tons in 1996, and 8.72 million tons in 1998. In 2009, production exceeded 9 million tons, and the province aimed to surpass 10 million tons in 2011, having basically achieved a tight balance between grain supply and demand. However, because of the impact and constraints of a low level of socio-economic development, imperfect institutions, inadequate mechanisms, and other factors, the grain security situation in Gansu Province, which lies in a grain-deficit region of China, is still grim and not optimistic. Based on a comprehensive understanding of the meaning of grain security, this study analyzes grain production, reserves, and the level of self-sufficiency in Gansu Province, together with the challenges it faces, and carries out an empirical analysis of the current state of grain security in the province in order to identify strategies and methods for achieving it.

MATERIALS AND METHODS

With the development of the economy and society and the progress of science and technology, the connotation of the concept of grain security has been continuously enriched and its scope has continually expanded (Byun et al., 1999). Over nearly half a century, people's awareness of food security has gone through a process of continual enrichment and deepening: from phenomenon to essence, from the local to the comprehensive, and from quantity to quality.

The problem of grain security has existed throughout the history of human society, but it truly became a focus of attention and research only in the 1970s, originating with the efforts of the FAO of the United Nations to respond to the food crisis. The definition of the concept at that time was: to ensure that everyone, at all times, can obtain enough food to survive and stay healthy. The basic requirement was to push governments to pay attention to world food security through a series of measures and to adopt appropriate national policies and measures to implement "global food security" (Barrett, 2002).
In April 1983, the FAO developed this concept further, pointing out that the essence of food security is that "everyone at all times can buy and afford basic foodstuffs." This definition, which focuses on food demand, stresses producing enough food and maximizing the stability of the food supply so that all people in need can obtain enough food (Ahn et al., 2002). In November 1996, the Rome Declaration on World Food Security and the World Food Summit Plan of Action discussed the meaning of food security again, stating that "only when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and preferences for active and healthy lives" can food security be achieved. This formulation raised the requirements: it not only clarified the economic dimension but also added the dimension of poverty, requiring not just enough food in quantity but also safety and nutrition in terms of quality. With respect to the objectives of food security, it raised food nutrition and safety to a new level.

On September 4-6, 2001, the World Food Security Conference held in Bonn, Germany, also put forward the concept of sustainable food security, introducing sustainable development strategies into the concept of food security. It requires providing consumers with pollution-free grain and other foods that enhance health and promote longevity.

The economics of food production: Food security should be achieved at a reasonable cost. That is, on the basis of protecting people's basic survival and social stability, the efficiency of resource utilization should be improved as much as possible while energy consumption and external costs are reduced. The economics of food production can be studied from the perspective of the optimal allocation of resources. With this simple assessment tool, one can not only make an initial judgment about the efficiency of the food supply but also improve the proportion and quality of production factor inputs under equilibrium conditions. The high cost of grain production in Gansu is not conducive to increasing farmers' incomes: the unit production cost of wheat, the province's main grain crop, is 1.12 yuan/kg, slightly lower than the national average, but about 2.5 times that of Canada, a country with highly efficient grain production, so the unit production cost has no comparative advantage at all. Further analysis shows that the persistently high cost of grain production in Gansu is mainly due to a large surplus of rural labor and a low level of agricultural modernization. In the composition of wheat production costs in Gansu Province, labor input is four times that of Canada, while mechanical input is only 32% of Canada's. Compared with mechanized mass production, the dispersed small-scale peasant economy is at an obvious disadvantage.

Fig. 1: Grain production of Gansu Province (1978-2009)

Longitudinal analysis of overall grain production in Gansu Province: Statistics show that grain production in Gansu Province was 4.91 million tons in 1978 and rose to 9.06 million tons in 2009 (Fig. 1). As can be seen from Fig. 1, grain production has developed through several stages (including turning points around 1990 and 1994). In the fifth stage (2000-2009), production grew continuously; apart from a decline in 2006, grain production increased in every other year compared with the previous year.
Longitudinal analysis of the per capita grain situation in Gansu Province: As incomes rise in China, the structure of residents' food consumption will change significantly: direct consumption of staple rations and vegetables will gradually decline, while consumption of animal foods will gradually rise. This reflects the improvement in dietary quality that accompanies rising incomes, but grain remains the main material basis for that improvement. As pressure on resources intensifies, the question of whether grain production can meet residents' direct consumption needs will become more severe. As can be seen from Fig. 1, the change in per capita grain output is similar to that of total grain yield in Gansu Province and has also gone through several stages of development. Generally speaking, however, reaching the goal of 400 kg of grain per capita in Gansu Province will require a difficult process, especially given the shortage of agricultural resources. In 1995, there was a serious province-wide drought, with the drought-affected area reaching 262,000 hm²; per capita grain output fell to its lowest point of the 1990s, only 269.7 kg. In 1998, per capita grain output reached a historic high of 351.1 kg; thereafter, as a result of the national Grain for Green Project, the area sown to grain decreased to a certain extent, so total and per capita grain production declined somewhat. From 2001 to 2003, per capita grain production fluctuated considerably: 279.04 kg in 2001, 273.18 kg in 2002, and 301.89 kg in 2003. After 2000, grain production in Gansu Province increased for nine consecutive years, but per capita output reached only 343.8 kg, whereas the benchmark of 400 kg per capita is merely the world average and is not a high indicator. Taking into account continued population growth, accelerating industrialization, the implementation of the project to convert cropland to forest and grassland, and the increasingly acute shortages of arable land and water resources, achieving 400 kg of grain per capita will be a daunting task. Thus, 400 kg of grain per capita is not a high level, but it is a difficult target.
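The per capita benchmark discussed above is simple arithmetic: divide total grain output by population and compare with the 400 kg reference level. The sketch below illustrates the calculation; the population figure used is a placeholder assumption, not a value taken from the paper.

```python
# Per capita grain output vs. the 400 kg benchmark discussed above.
# The population figure below is a placeholder for illustration only.

BENCHMARK_KG = 400.0

def per_capita_grain_kg(total_output_tonnes, population):
    """Convert provincial grain output (tonnes) to kg per person."""
    return total_output_tonnes * 1000.0 / population

# Example: 9.06 million tonnes (the 2009 output cited in the paper) with an
# assumed provincial population of about 26.4 million people.
output_t = 9.06e6
population = 26.4e6
pc = per_capita_grain_kg(output_t, population)
gap = BENCHMARK_KG - pc
print(f"per capita: {pc:.1f} kg; shortfall vs 400 kg benchmark: {gap:.1f} kg")
```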
DISCUSSION AND CONCLUSION

Food security is relative, is constrained by many factors, and always carries potential risks. Gansu Province is located in the inland northwest; because of its low level of economic development, the poor natural environment and weak foundation of its agriculture, and the impact of climate anomalies and other factors, its food security risks are relatively high. We must therefore strengthen awareness of food security and, following the approach of protecting arable land, increasing inputs, adjusting the structure, relying on science and technology, pursuing high yields, and increasing total output, respond to the new understanding of food security at home and abroad and update the concept of food security. The food supply should be taken as the core, ensuring the total market supply of grain and the basic needs of low-income groups as well as the nutritional security of the food supply, so as to form an open, effective, and sustainable view of food security. With regard to food supply capacity, attention should be paid not only to the quantity of grain but also to improving quality, the efficiency of resource allocation for grain, and the development of specialty foods. With regard to the balance of supply and demand, attention should be paid not only to the regulation of grain stocks and reserves but also to the role of circulation and inter-regional trade. With regard to balance among grain-producing areas, the province should not only seek self-balance but also give full play to the comparative advantages of each region, especially areas with advantages in grain acreage, water resources, climate, and planting techniques, such as the Hexi region; it should exploit the comparative advantages of grain cultivation, strengthen the protection of grain acreage, and strictly control the expansion of non-grain industries into grain-producing areas. At the same time, the structure of grain varieties should be adjusted continuously in line with changes in residents' consumption structure, attention should be paid to the diverse demands of consumers at different levels for food safety and hygiene, the focus should shift from the quantity of food to its quality and safety, and food resources should be comprehensively developed to expand the sources of food.

Coordinating the solution of rural issues and improving grain production capacity: Food security is not merely a problem of grain production or circulation; it is closely related to the overall level of development of the national economy, the industrial structure, institutional arrangements, national policy, and the global economy. Although the basis of food security lies in agriculture, solving the problem of food security must go beyond agriculture: the scientific concept of development should be implemented comprehensively, the construction of a new socialist countryside promoted, and the "three rural" issues (agriculture, rural areas, and farmers) associated with food security addressed comprehensively, so that the rural ecological environment improves, farmers' incomes continue to increase, and agricultural production capacity rises steadily.
To this end, first, all effective measures should be taken to develop modern agriculture, transform agriculture with modern science and technology, and improve the level of agricultural equipment and modernization. The limitations of traditional farming should be overcome as soon as possible, integrating grain production, deep processing of grain, logistics, technical support, and information services with the secondary and tertiary industries, so as to industrialize the grain business. Second, in view of Gansu Province's fragile ecological environment, poor natural conditions for agricultural production, underdeveloped agricultural infrastructure, insufficient investment in agriculture, and dwindling arable land, water conservancy construction should be strengthened and agricultural production conditions and the ecological environment improved, while the strictest farmland protection system is applied to the existing grain area according to law. At the same time, investment in agricultural science and technology should be increased in order to raise grain yield per unit area, increase grain output, and promote the sustainable development of grain production. Third, a series of preferential agricultural policies should be formulated to address the main problems, doing everything possible to mobilize farmers' enthusiasm for growing grain so that they genuinely benefit from grain cultivation, eliminating the phenomenon of abandoned land and keeping the outflow of young rural laborers within limits that agricultural production can withstand. Finally, with increasing farmers' income at the center, the adjustment of the agricultural and rural economic structure should continue. On the premise of ensuring that total grain output increases, regional resource and geographic advantages should be brought into full play, measures suited to local conditions adopted, and scientific planning, rational distribution, and step-by-step implementation pursued, so as to speed up the optimization and upgrading of the grain planting structure, distribution structure, and variety structure.

Protecting arable land and gradually establishing and improving the system for protecting basic farmland: The strictest arable land protection system must be implemented, and the grain sown area must be stabilized at 16.5 million acres. Basic farmland must be protected effectively, indiscriminate occupation of arable land strictly prohibited, and special protection policies implemented for grain production bases. The national and Gansu provincial governments need to allocate funds to reclaim and develop arable land in a planned way, in order to keep the grain sown area stable and gradually increase it in line with demand.

Aiming at balance within the province and improving the food security reserve system: Grain reserves are not only the material basis for safeguarding regional food security but also the government's primary means of regulating the grain market. Gansu Province suffers frequent natural disasters and large fluctuations in agricultural production, and it also has relatively poor transport, financial difficulties in agriculture, and a large share of ethnic minority areas, so it is important to establish a food security reserve system of appropriate size and rational structure. According to the province's food supply and demand situation and development trends, the food supply should rely mainly on supply within the province, supplemented by transfers from outside, with self-balance as the target.
This will require establishing a food security reserve system of appropriate scale and rational structure and letting safety stocks play their role in balancing grain supply and demand. In accordance with the State Council's standard for grain reserves of three months of consumption in producing areas and six months in sales areas, the vertical grain reserve system should be strengthened and grain reserve capacity consolidated. In addition to completing provincial grain reserve tasks in both quantity and quality, local reserves and farmers' own storage should be actively promoted in light of local grain supplies and the frequency of disasters; the per capita grain reserve index should be determined reasonably; scientific grain storage should be actively promoted and rural grain storage technology popularized; and the level of reserves in rural areas should be gradually increased to adapt to the changing food supply and demand situation. The scale and quality of provincial grain reserves should be ensured and the rotation of aged (stale) grain stocks accelerated. At the same time, the sense of responsibility and mission of governments at all levels for grain reserves should be strengthened, the establishment of grain reserve systems at the city and county levels supported and encouraged, and the capacity of local governments to regulate the grain market enhanced. In accordance with the Grain Circulation Management Regulations and local conditions, maximum and minimum inventory standards should be determined for key grain enterprises. Preferential state policies should be used to implement grain reserves, standardize reserve sites, reserve costs, and management, and take on a range of grain control tasks, so that reserves are available when needed and the "reservoir" regulatory function is further enhanced.

ACKNOWLEDGMENT

This study was supported by "Under the perspective of water footprint in inland river basin in the northwest resident low water consumption mode study" (No. 41201595), "Research on livelihood risk of farmers in Shiyang River Basin" (No. 41401653), "Research on the Ecological Compensation Mechanism of the Inner River Basins of the Northwestern China-A Case Study of Shiyang River Basin" (No. 41171116), and General Research on Social Sciences of the Ministry of Education of the PRC China (No. 12YJAZH110).
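As a closing illustration of the reserve standard discussed above ("three months of consumption in producing areas, six months in sales areas"), the sketch below converts that standard into a target stock level once average monthly consumption is known. The population and per-capita consumption figures are illustrative assumptions, not values from the paper.

```python
# Grain reserve target from the "3 months (producing areas) /
# 6 months (sales areas)" standard discussed above.
# Population and per-capita consumption below are placeholders.

def reserve_target_tonnes(population, per_capita_kg_per_year, months):
    """Stock needed to cover `months` of consumption, in tonnes."""
    monthly_consumption_kg = population * per_capita_kg_per_year / 12.0
    return monthly_consumption_kg * months / 1000.0

population = 26.0e6   # assumed provincial population
per_capita = 400.0    # assumed annual grain consumption, kg per person

for months, label in [(3, "producing-area standard"), (6, "sales-area standard")]:
    target = reserve_target_tonnes(population, per_capita, months)
    print(f"{label}: about {target / 1e6:.2f} million tonnes")
```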
The role of comprehensive analysis with circulating tumor DNA in advanced non‐small cell lung cancer patients considered for osimertinib treatment Abstract Background EGFR mutations are good predictive markers of efficacy of EGFR tyrosine kinase inhibitors (EGFR‐TKI), but whether comprehensive genomic analysis beyond EGFR itself with circulating tumor DNA (ctDNA) adds further predictive or prognostic value has not been clarified. Methods Patients with NSCLC who progressed after treatment with EGFR‐TKI, and with EGFR T790 M detected by an approved companion diagnostic test (cobas®), were treated with osimertinib. Plasma samples were collected before and after treatment. Retrospective comprehensive next‐generation sequencing (NGS) of ctDNA was performed with Guardant360®. Correlation between relevant mutations in ctDNA prior to treatment and clinical outcomes, as well as mechanisms of acquired resistance, were analyzed. Results Among 147 patients tested, 57 patients received osimertinib, with an overall response rate (ORR) of 58%. NGS was successful in 54 of 55 available banked plasma samples; EGFR driver mutations were detected in 43 (80%) and T790 M in 32 (59%). The ORR differed significantly depending on the ratio (T790 M allele fraction [AF])/(sum of variant AF) in ctDNA (p = 0.044). The total number of alterations detected in plasma by NGS was higher in early resistance patients (p = 0.025). T790 M was lost in 32% of patients (6 out of 19) after acquired resistance to osimertinib. One patient with RB1 deletion and copy number gains of EGFR, PIK3CA, and MYC in addition to T790 M, showed rapid progression due to suspected small cell transformation. Conclusions NGS of ctDNA could be a promising method for predicting osimertinib efficacy in patients with advanced NSCLC harboring EGFR T790 M. | INTRODUCTION Liquid biopsy utilizing circulating tumor DNA (ctDNA) has become an accessible, non-invasive approach for evaluating genomic alterations in advanced stage cancers. 1,2 Considering tumor evolution in which genomic alternations arise in response to treatment, it is essential to assess emerging genomic alterations that arise after initial therapy so that they may inform decisions about later lines of treatment. 3 At the time of disease progression, performing tumor genomic assessment using plasma is more convenient than repeating a tumor biopsy. Furthermore, because tumor DNA can be shed by all metastatic tumors within the body, ctDNA analysis may better reflect the global status of a tumor's genomic alterations. Our research group has examined whether genomic alterations of non-small cell lung cancer (NSCLC) can be detected by ctDNA analysis, and their correlation with tumor progression, starting with the HASAT study that focused on EGFR T790 M. 4 This gatekeeper mutation of EGFR occurs in 50%-60% of patients with NSCLC who have EGFR activating mutations and who acquire resistance to first and second generation EGFR tyrosine kinase inhibitors (EGFR-TKI). 5,6 Thereafter, cobas ® EGFR mutation test version 2 was approved in Japan as a companion diagnostic test using tissue or plasma for the detection of T790 M when the physician is considering treatment with the third generation EGFR-TKI, osimertinib, which is targeted for T790 M as well as EGFR activating mutations. 7 The cobas test is based on allele-specific real-time PCR, and the detection limit has been reported to be 0.025%−0.15% by analysis using fragmented DNA isolated from lung cancer cell lines bearing EGFR mutations. 
Because detection of ctDNA is associated with tumor progression, it can also be characterized as a prognostic factor. 4,8 However, it had not been clarified whether liquid biopsy with ctDNA is useful for assessing treatment efficacy. A phase III trial of osimertinib among patients with NSCLC who had tumors harboring EGFR T790 M (AURA 3) clearly demonstrated that liquid biopsy can predict efficacy of osimertinib by revealing T790 M in plasma. 9 However, level of efficacy varied from complete response to primary resistance even among patients in whom T790 M was detected. We hypothesized that co-existing genomic alterations beyond EGFR might impact treatment efficacy and that comprehensive genomic analysis could lead to more precise prediction of treatment efficacy. Here we conducted a prospective, multi-center, observational study to examine the efficacy of liquid biopsy as a predictive marker for the third generation EGFR-TKI, osimertinib. Using banked plasma samples, we retrospectively performed comprehensive genomic analysis with next-generation sequencing (NGS) using Gurdant360, a commercially available NGS assay for ctDNA. 10,11 Our aims were to investigate whether the co-existence of variants other than T790 M is correlated with response to osimertinib and to assess the clinical utility of NGS with ctDNA for better prediction of treatment efficacy. | Study design This was a retrospective analysis using banked plasma samples collected for the S-PLAT study, a prospective, multicenter, observational study to investigate the usefulness of liquid biopsy for predicting the outcome of treatment with third generation EGFR tyrosine kinase inhibitor among patients with advanced NSCLC whose disease progressed after treatment with first or second generation EGFR-TKI. Eligible patients were those with NSCLC having EGFR activating mutations-including G719X, exon 19 deletion, L858R, and L861Q-whose diseases had progressed after treatment with first or second generation EGFR-TKI. Patients were excluded if they were treated with cytotoxic chemotherapy within 14 days of the first dose of study treatment or if they had radiotherapy within 4 weeks of the first dose of study treatment. Patients having a history of treatment with osimertinib or immune checkpoint inhibitors were also excluded. The patients in whom T790 M was confirmed by an approved companion diagnostic test, cobas ® EGFR Mutation Test v2, were treated with osimertinib. The specimens tested by cobas were tissue, plasma, or both, depending on each physician's choice. Comprehensive molecular analysis was performed with Guardant360 on ctDNA extracted from plasma collected before osimertinib treatment and again at the time of disease progression. The primary objective of this study was to determine whether tumor responses, such as overall response rate (ORR) and progression-free survival (PFS) under osimertinib among patients who are positive for T790 M in ctDNA, using the mutation-biased polymerase chain reaction (PCR) and quenched probe (MBP-QP) method (a highly sensitive mutation system developed in our laboratory), are equivalent to those from historical data based on cobas testing of tumor tissue in the AURA study. 12,13 In this paper, we focused on the exploratory objectives, which were to assess the association of ORR and PFS to allele fraction (AF) of T790 M or other variants detected by NGS with ctDNA. 
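For readers unfamiliar with the allele-fraction (AF) quantities used in these exploratory analyses, the ratio examined later, (T790 M AF)/(sum of variant AF), is a simple per-patient summary of the plasma NGS calls. A minimal sketch is shown below; the record format and the example values are hypothetical placeholders and do not correspond to the actual Guardant360 output or to any study patient.

```python
# Hypothetical illustration of the per-patient ratio used in the exploratory
# analyses: (T790M allele fraction) / (sum of all variant allele fractions).

def t790m_af_ratio(variants):
    """variants: iterable of (gene, alteration, allele_fraction_percent) tuples."""
    total_af = sum(af for _, _, af in variants)
    if total_af == 0:
        return None  # no ctDNA variants detected
    t790m_af = sum(af for gene, alt, af in variants
                   if gene == "EGFR" and alt == "T790M")
    return t790m_af / total_af

# Invented example profile (not a study patient)
example = [
    ("EGFR", "L858R", 1.05),   # activating driver mutation
    ("EGFR", "T790M", 0.216),  # resistance mutation
    ("TP53", "R273H", 0.80),   # co-occurring alteration
]
print(round(t790m_af_ratio(example), 3))  # 0.105 for this made-up profile
```

In practice the variant list would be parsed from the assay report, and patients with no detectable ctDNA would be handled separately, as the study does by assigning them to their own group (G1).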
Response evaluation by imaging was recommended every 8 weeks, and performed according to the Response Evaluation Criteria in Solid Tumors (RECIST) ver.1.1. This study was conducted in accordance with the Declaration of Helsinki and approved by the ethics committees of all participating facilities represented by Saga University. Written informed consent was obtained from all participants. The study was registered at UMIN-CTR (UMIN000025930). | Molecular analysis with ctDNA From each patient, prior to the start of osimertinib treatment, 10 ml of peripheral blood was collected into a blood tube containing 3.8% citrate. Blood was centrifuged at 3,000 rpm for 20 min at 4℃ to collect 4 ml of plasma, and ctDNA was extracted with a Maxwell RSC® ccfDNA plasma kit (Promega, WI, USA). 14,15 At the time of disease progression, peripheral blood was collected in two 10 ml Cell-Free DNA BCT® tubes (Streck, NE, USA). Extracted ctDNA or peripheral blood was shipped for Guardant360 analysis (Guardant Health Inc., CA, USA). The cobas plasma test was performed by designated testing companies (SRL Inc., Tokyo, Japan; LSI Medience Corporation, Tokyo, Japan; BML Inc., Tokyo, Japan). | Statistical analysis The association between treatment efficacy with osimertinib and EGFR mutation AF by NGS was tested with Pearson's χ 2 test. The survival rate was calculated according to the Kaplan-Meier method and the log-rank test was used for assessing differences. The comparison between early resistance and non-early resistance on clinical and genomic parameters was tested with the χ 2 test for categorical data and the nonparametric Mann-Whitney U test for continuous data. For multivariable analysis, a logistic regression model was applied with explanatory variables that were statistically significant (p ≤ 0.20) in the two-group comparison test. Odds ratios (OR) with 95% confidence intervals (CI) were estimated. The AF difference between pre-treatment and after progressive disease was assessed with the nonparametric Wilcoxon signed-rank test. Statistical significance was declared if p < 0.05. Statistical analysis was conducted with SPSS version 19 (IBM SPSS Statistics, IBM, Tokyo, Japan). | Study flow and patient characteristics The flow of this study is shown in Figure S1. Eligible patients were registered from 28 Japanese hospitals between February 2017 and January 2019. Although 153 participants were enrolled, 6 of them were withdrawn due to worsening general condition or difficulty with tissue sampling. Samples from the remaining 147 patients underwent cobas analysis with tissue re-biopsy (n = 72), ctDNA (n = 60), or both (n = 15) as companion diagnostics for osimertinib ( Figure S2A), and 60 patients were shown to harbor T790 M. T790 M was detected in 52.9% and 24.0% with tissue and ctDNA samples, respectively, using cobas (p = 0.0002, χ2 test; Figure S2B). Three patients were not treated with osimertinib because they declined treatment or met one of the exclusion criteria, such as hepatitis B antigen positivity, leaving 57 patients who were treated with osimertinib (Table 1). During the follow-up period, 36 patients' diseases progressed during osimertinib treatment. The median age of all 57 osimertinib-treated patients was 72 (range 42-88) years, and the majority were female (68%), had never smoked (74%), and had stage IV or recurrent tumors after surgery (84%). Extrathoracic metastases were observed among 53% of the osimertinib-treated patients. 
EGFR activating mutations included exon 19 deletion in 53% and L858R in 47%. T790 M detected by cobas with plasma was found in 16 patients (28%) and with tumor tissue in 44 (77%), including 3 patients with T790 M detected in both tissue and plasma. Fifty-six patients were evaluated for response to osimertinib; one patient developed a cerebral infarction (unrelated to treatment) and could not be evaluated. Among all patients treated, the ORR to osimertinib was 58% (33 of 57), with a disease control rate (DCR) of 91% (52 of 57). Median PFS was 14 months (95% CI 9.863-18.137) and the median follow-up period was 24 months (range 11-35). | NGS analysis with ctDNA before treatment with osimertinib To investigate the potential influence of oncogenic mutations in addition to EGFR, we analyzed, with a comprehensive NGS platform (Guardant360), 55 available plasma samples from 57 patients, where the samples had been banked before treatment with osimertinib (Figure S1). NGS was technically successful in 54 out of the 55 samples (98%), and ctDNA was detected in 52 of the 54 samples that had complete analysis (96% detection rate). The findings of each patient are summarized in Figure 1. Synonymous mutations and variants of unknown significance (VUS) were excluded. In addition to EGFR single-nucleotide variants (SNV), copy number variants (CNV) in EGFR, ERBB2, and cell cycle-related genes were detected along with SNV and insertions/deletions (Indels) in genes such as TP53, PIK3CA, CTNNB1, and SMAD4. EGFR driver mutations were found in 43 (80%) and T790 M in 32 (59%) of the 54 samples (Figure 2A). The median AF of T790 M and EGFR activating mutations was 0.216 (0-45.78) and 1.05 (0-89.6), respectively (Figure 2B). Patients were divided into three groups according to the pretreatment level of T790 M AF: G1 (not detected), G2 (below median), and G3 (median and above). The ORRs were 41% (9 of 22), 63% (10 of 15), and 75% (12 of 16) in G1, G2, and G3, respectively, but the trend was not statistically significant (p = 0.150, Figure 3B). However, when grouping was based on the ratio (T790 M AF)/(sum of variant AF), ORR increased significantly according to group: G1 (41%, 9/22), G2 (53%, 8/15), and G3 (81%, 13/16) (p = 0.044, Figure 3A). Analysis of EGFR activating mutation AF or the ratio (EGFR activating mutation AF)/(sum of variant AF) showed no statistically significant correlation with ORR (Figure 3C). For PFS, grouping by the ratio (T790 M AF)/(sum of variant AF) also showed no statistically significant difference (p = 0.582, Figure S4). We further analyzed the outcomes based on clinical responses to osimertinib, with patients classified as "non-early resistance" (PFS >90 days) or "early resistance" (PFS ≤90 days), because the PFS among the patients with PD ranged from 15 to 90 days. Figure 4 shows the relationship between the number of genomic alterations detected prior to treatment with osimertinib and the therapeutic response to osimertinib. The number of oncogenic SNVs and Indels (including VUS), and that number plus CNVs, were both higher in "early resistance" patients, with statistically significant differences (p = 0.036 and p = 0.025, respectively, Figure 4A, B). Even with VUS excluded, the sum of SNVs/indels and CNVs was higher in "early resistance" than in "non-early resistance" patients (p = 0.028, Figure 4C).
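The two comparisons just described (ORR across the ratio-based groups, and alteration counts between early- and non-early-resistance patients) reduce to standard contingency-table and rank-based tests. The sketch below illustrates them in Python; the responder counts are taken from the ORR percentages quoted above, while the per-patient alteration counts are invented placeholders rather than the study data.

```python
# Illustrative sketch of the two group comparisons described in the text.
from scipy import stats

# (1) ORR across ratio groups G1-G3 as a 2x3 contingency table
#     rows: responders / non-responders, columns: G1, G2, G3
orr_table = [[9, 8, 13],   # responders (41% of 22, 53% of 15, 81% of 16)
             [13, 7, 3]]   # non-responders
chi2, p_orr, dof, _ = stats.chi2_contingency(orr_table)

# (2) Number of detected alterations per patient by clinical outcome
early    = [9, 7, 11, 6, 8]        # placeholder counts, early resistance
nonearly = [3, 4, 2, 5, 4, 6, 3]   # placeholder counts, non-early resistance
u_stat, p_counts = stats.mannwhitneyu(early, nonearly, alternative="two-sided")

print(f"ORR vs ratio group: chi-square p = {p_orr:.3f}")    # close to the reported 0.044
print(f"Alteration counts:  Mann-Whitney p = {p_counts:.3f}")
```

Whether such a recomputation matches the published values exactly depends on the test variant and rounding used by the authors; the contingency-table example above happens to land close to the reported p = 0.044.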
When "early resistance" and "non-early resistance" patients were compared in terms of various clinical factors, the number of oncogenic SNVs and Indels (including VUS), with or without inclusion of CNVs, and EGFR activating mutation type showed statistically significant differences between them (Table 2). Multivariable analysis of the association of "early resistance" to osimertinib showed a statistically significant association with SNV/ Indels plus CNVs (Table 3). | Analysis of acquired resistance to osimertinib based on NGS with ctDNA NGS analysis was also performed after disease progression under osimertinib treatment and compared to the NGS profiles before treatment in order to assess acquired resistance mechanisms. Plasma samples could be collected and NGS successfully performed in 20 of the 36 patients who developed disease progression during osimertinib treatment; variants were detected in 19 of these 20 patients. The AF of T790 M was significantly lower after disease progression, but not that of EGFR activating mutations (p = 0.036 and Figure 5). In 18 of 19 cases, T790 M was not detected after disease progression, and T790 M was neither detected before treatment nor after disease progression in 10 cases. Table 4 shows the frequencies of T790 M loss (change from presence to absence) and variants newly observed. T790 M was lost in 37% of patients (7 out of 19). New alterations included EGFR minor mutations (S752C and S306L) in two patients, MET SNV/CNV in two patients, TP53 SNV in three patients, PIK3CA SNV in two patients, cell cycle-related genes SNV/CNV in two patients, and MYC CNV in one patient. Other variants included SNVs in the genes NF1, CDK12, ARID1A, CTNNB1, DDR2, and APC, and CNVs of CDK6, CCNE1, FGFR1, and BRAF. There was no case with EGFR C797S. We also observed a patient with possible small cell transformation, with suspicion triggered by the finding from ctDNA NGS profiling ( Figure 6). This patient, a 61-year female non-smoker with EGFR L858R, was treated with osimertinib after disease progression on gefitinib, her fifth line of treatment for NSCLC. Prior to osimertinib treatment, the NGS assay detected EGFR L858R (AF 22.2%) and T790 M (AF 0.5%) along with RB1 c.1774_1814+12del (AF 18.6%) and copy number gains of EGFR, PIK3CA, and MYC, but no TP53 mutation. Shortly after the patient was started on osimertinib, the tumor rapidly progressed. Although EGFR T790 M became undetectable, AFs of both EGFR L858R and the RB1 indel increased. Her physician suspected small cell transformation, but a tissue biopsy could not be obtained because of the patient's rapid deterioration. Neuron-specific enolase (NSE) was found to be elevated in serum, and a regimen for small cell carcinoma (carboplatin plus irinotecan) was initiated. This regimen caused shrinkage of the right pleural dissemination along with a decrease in NSE. Brain metastasis occurred after three courses of the regimen and was followed by deterioration of the pleural dissemination. Despite treatment with the anti-PD-L1 antibody atezolizumab, the patient died 2 months later. | DISCUSSION Osimertinib for treatment of NSCLC is a potent EGFR-TKI for previously treated patients as well as in the first-line setting. [16][17][18] However, some tumors demonstrate primary resistance: 6% among previously treated patients and 1% among those treated in the first line. 
EGFR activating mutations are known to be strong tumor drivers, but in addition to EGFR mutations, a diversity of genomic profiles has been reported recently on the basis of multi-region whole-exome sequencing. 19 Genomic variants outside of EGFR may impact treatment efficacy of EGFR-TKI. Recent reports based on NGS showed that STK11 mutation co-existing with KRAS-mutated NSCLC caused refractory response of immune checkpoint inhibitors (ICI). 20 In patients with prostate cancer, BRCA2 and ATM defects, TP53 mutations, and AR gene structural rearrangements were strongly associated with poor clinical outcome of patients treated with androgen receptor-directed therapies. 21 Considering these results, we hypothesized that co-existence of EGFR mutations with other variants might cause primary resistance to, or weak efficacy of, EGFR-TKI. At first, we evaluated the relationship between T790 M AF and treatment efficacy of osimertinib. Since osimertinib shows more potent inhibition against T790 M than against EGFR activating mutations, 22 we expected that osimertinib would have greater efficacy in cases with high T790 M AF, but the results showed no significant difference. However, T790 M AF divided by the sum of AF of total variants was significantly associated with ORR. The amount of ctDNA has been known to be associated with tumor burden or distant metastasis. 4,8 Therefore, it is assumed that T790 M AF corrected by total AF could represent the proportion of T790 M among whole tumor in each patient. A similar result has been reported in that T790 M purity shown by the ratio of T790 M AF to maximum somatic AF was associated with osimertinib efficacy. 23 However, our results showed that the relationship between the (T790 M AF)/(maximum variant AF) proportion and ORR was not statistically significant (p = 0.10, data not shown). In this study, NGS using Guardant360 revealed various co-existing genomic alterations prior to osimertinib treatment, possibly because many patients had received several treatment regimens prior to enrolling. The number of genomic alterations (the sum of SNVs and indels with or without CNVs) was significantly higher in "early resistance" patients. One explanation for these results is that additional variants arising after modification of previous treatments might cause activation of alternative pathways or cross talk with the main pathway. Even in the early stages of lung cancer among patients with EGFR mutations, a high number of truncal mutations and overall mutation burden were significantly related to shorter overall survival. 19 However, we could not find any reports about the relationship between efficacy of treatment including osimertinib and the number of whole variants in addition to EGFR mutations. TP53 mutation has been reported to impact clinical outcome by facilitating genomic instability, 19 but we did not observe any relationship between response to osimertinib and specific co-occurring variants in genes such as TP53 or PIK3CA. According to Jaml-Hanjani's report on tumor evolution of NSCLC, genomic doubling caused intra-tumor heterogeneity of copy number alterations and mutations, and that was associated with poor outcome. 3 The NSCLC patients with high copy number alterations observed in subclonal trajectory showed shorter disease-free survival than those with low copy number. Therefore, the number of variants, rather than the presence of specific alterations, could be associated with impaired treatment efficacy. 
As shown in the present paper, recent technology has enabled evaluation of total variants, including SNVs and CNVs, with ctDNA, and it is worth investigating the association between total variants measured with ctDNA and the efficacy of EGFR-TKIs such as osimertinib. Mechanisms of acquired resistance to osimertinib are known to be heterogeneous, including EGFR C797X; loss of T790 M; SNV of PIK3CA, KRAS, and BRAF; and amplification of MET. 24,25 In addition to these variants, epithelial-to-mesenchymal transition, manifested as small cell carcinoma (SCLC) transformation, is not an infrequent cause of EGFR-TKI resistance. In our cohort, we detected one patient whose cancer we suspected of being SCLC transformation, on the basis of tumor markers and the treatment efficacy of a regimen for SCLC; however, it could not be confirmed by pathological analysis, which is necessary for diagnosis. [26][27][28][29] Molecular analysis with whole-genome sequencing has shown that inactivation of RB1 and TP53 can be observed in advanced stage EGFR-mutated NSCLC. RB1 loss is known to occur in 100% of SCLC transformation cases, and MYC amplification is associated with poor prognosis. [29][30][31] In our suspected case of SCLC transformation, we detected both RB1 deletion and MYC amplification before treatment with osimertinib, and these alterations expanded with the development of resistance to osimertinib. A further benefit of molecular analysis with ctDNA is, therefore, that it might raise the clinical suspicion of SCLC transformation, triggering a tissue biopsy to guide appropriate therapy. In patients with NSCLC whose disease has progressed during first or second generation EGFR-TKI treatment, detection of EGFR T790 M supports the decision to initiate osimertinib treatment. As the quality of ctDNA detection methodologies has improved, most patients with disease progression no longer require re-biopsy for evaluation of actionable tumor biomarkers.

FIGURE 4 The relationship between the number of genomic alterations detected prior to treatment with osimertinib and the therapeutic response to osimertinib is shown. Six patients, out of 54 among whom NGS was technically successful, were excluded from the analysis because of discontinuation due to adverse effects or patient's request. (A) The number of oncogenic SNVs and Indels including VUS. (B) The sum of SNVs/indels and CNV including VUS. (C) The sum of SNVs/indels and CNV excluding VUS. Non-responders and responders had PFS of 90 days or less, and more than 90 days, respectively. Comparisons between non-responders and responders were made with the nonparametric Mann-Whitney U test. SNV/Indel, single-nucleotide variants and/or insertion/deletions; CNV, copy number variants; VUS, variants of unknown significance.

TABLE 3 Multivariable analysis of association with early resistance to osimertinib. Note: Six patients, out of 54 among whom NGS was technically successful, were excluded from the analysis because of discontinuation due to adverse effects or patient's request.

One limitation of our study is the small study group size; another is that tumor response and PFS were assessed by the investigators, not by independent central reviewers. In addition, our study was limited by the fact that ctDNA analysis of EGFR alterations was conducted in a population of patients with known T790 M, but no cases with wild-type EGFR, and therefore does not reflect the clinical setting where there is no prior information regarding mutations of interest.
However, our study demonstrates that the plasma-based comprehensive genomic panel is a practical tool for more precise prediction of treatment efficacy through detection of the total number of variants in addition to driver mutations, and for analysis of potential mechanisms of resistance to TKIs. Although osimertinib treatment is still recommended for patients with EGFR activating mutations and T790 M, we can be prepared to change the treatment strategy quickly for patients suspected of early resistance. In addition, the pathological significance of variants co-existing with EGFR mutations needs to be investigated, which may lead to a new treatment strategy in combination with EGFR-TKI. A further clinical trial using plasma NGS could confirm an expanded role for ctDNA to allow better identification of patients with NSCLC who are most likely to benefit (or patients who are most likely not to benefit) from targeted therapies such as osimertinib.

ACKNOWLEDGMENTS The authors appreciate the patients, investigators, nurses, and study coordinators for their generous participation and contribution. The study was registered at UMIN-CTR (UMIN000025930).

CONFLICT OF INTEREST One of the authors reports personal fees from Eli Lilly, personal fees from Pfizer, and personal fees from Taiho Pharmaceutical, outside the submitted work. Dr. Inoue has nothing to disclose. Dr. Takamori has nothing to disclose. Dr. Kawaguchi has nothing to disclose.

DATA AVAILABILITY STATEMENT The data that support the findings of this study are available from the corresponding author upon reasonable request. Anonymized data will be made available on request in accordance with institutional policies.
v3-fos-license
2023-02-25T14:56:27.158Z
2022-06-29T00:00:00.000
257161705
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://sustainenvironres.biomedcentral.com/counter/pdf/10.1186/s42834-022-00143-w", "pdf_hash": "8de55b557c2eac7c86ab0e68d48c1d9ca7ce94c5", "pdf_src": "SpringerNature", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:73", "s2fieldsofstudy": [ "Environmental Science", "Chemistry", "Materials Science" ], "sha1": "8de55b557c2eac7c86ab0e68d48c1d9ca7ce94c5", "year": 2022 }
pes2o/s2orc
Combination of rGO/S, N/TiO2 for the enhancement of visible light-driven toluene photocatalytic degradation Toluene is one of the common volatile organic compounds that are harmful to human health. Therefore, the degradation of toluene is critical to improving air quality. Improving the performance of TiO2, a typically applied photocatalyst, targets its light absorption and electron transfer processes. In this study, the TiO2 catalyst was improved by doping with reduced graphene oxide (rGO) and with sulfur and nitrogen (S, N) elements. The highest toluene photocatalytic degradation was achieved with the composition 0.1wt%rGO/S0.05N0.1TiO2. The improvement in photocatalytic activity was achieved by a higher specific surface area, the formation of oxygen-containing functional groups, and the chemical defect structure. However, a higher amount of added rGO creates a shielding effect and inhibits light penetration. Moreover, the relative humidity and applied temperature influence the photocatalytic activity through competitive adsorption and an increased collision frequency, respectively. During photocatalytic degradation using 0.1wt%rGO/S0.05N0.1TiO2, toluene is converted into benzyl alcohol, benzaldehyde, benzoic acid, water, and carbon dioxide.

Introduction On average, 86-87% of human activity takes place in the indoor environment, so the quality of indoor air is important [1]. Various types of volatile organic compounds (VOCs), including toluene, can reduce indoor air quality. Toluene, an aromatic hydrocarbon, is commonly generated in various coal-burning products (from the liquefaction process or coal aromatics) and in other products such as catalytic reforming products or the petroleum fractions after the steam cracking process [2]. Moreover, common daily human activities and materials such as fuel combustion, use of cooking gas, wood furniture, cleaning agents, carpets, or pesticides can also be sources of toluene [3]. Toluene is considered a carcinogenic agent and is also harmful to the nervous system, liver, kidneys, and lungs; therefore, its removal is critical for human health [4]. Various strategies have been applied in VOC control technologies, such as thermal, biological, or catalytic oxidation, condensation, adsorption, and absorption. However, most of these strategies can create secondary pollutants, require high cost and energy, or are not effective for low pollutant concentrations [3,5]. Photocatalytic degradation is a potential strategy for the removal of indoor air pollution with low cost and energy consumption, and it remains efficient at low pollutant concentrations. A widely applied photocatalyst, TiO2, has been implemented for the decomposition of organic compounds as well as in water remediation, supercapacitors, porous adsorbent supports, and sensor devices [6]. However, under visible-light irradiation, the application of TiO2 is limited by the rate of photogenerated electron-hole pair recombination. TiO2 has a wide bandgap of 3.2 eV; thus, its photocatalytic activity is driven by UV illumination at wavelengths below 388 nm. Moreover, during the reaction, the photocatalytic activity may decrease because the high recombination rate of electron-hole pairs leads to a low electron concentration in the conduction band.
Therefore, interest has been gained for the improvement in TiO 2 photocatalytic activity through modification [6]. Metal element doping on the TiO 2 surface increases the capability to absorb visible lights through the formation of traps for photo-induced electron or holes that leads the electron to a reduction state during the photocatalytic process [7]. On the other hand, recombination inhibition for the TiO 2 photogenerated electron-hole pair will be carried out by the non-metallic element doping [8]. Replacement of TiO 2 lattice oxygen can be carried out by sulfur element as an anion. Furthermore, new energy states in TiO 2 band gap may occur with the carbon element doping which will substitute the oxygen [9]. Moreover, the surface oxygen vacancies is achieved by the nitrogen element doping on TiO 2 photocatalyst material [10]. TiO 2 doping with carbon material is a promising strategy due to its low cost, good conductivity property, and high absorbance of light that will improve the efficiency of pollutant photocatalytic degradation [11]. Furthermore, graphene has been observed for its enhancement on TiO 2 photocatalyst through high electron mobility. The lower Fermi level of graphene compared to the minimum conduction band of TiO 2 provides suitable utilization as the electron sink. Therefore, the interface charge separation and the inhibition of photogenerated electron-hole recombination will be facilitated by graphene. Moreover, in the TiO 2 -graphene photocatalyst, the electron transfer from TiO 2 surface to graphene causes a visible-light absorbance extension and the recombination inhibition of charge carrier [12]. This study observed the enhancement of TiO 2 photocatalytic activity through doping of non-metallic elements (S, and N), and various concentrations of reduced graphene oxide (rGO). Toluene was chosen as the targeted pollutant for the study of indoor air pollution control using photocatalytic degradation. Moreover, the study of influence from various environmental conditions was also carried out along with the kinetic study and proposed degradation mechanism. Thus, this study provides comprehensive explanation including properties, performance, kinetics, and pollutant degradation mechanism of the TiO 2 photocatalyst improvement via doping of nonmetallic elements and low-cost material. Production of nanocomposite photocatalyst Graphite oxide was prepared previously using the Hummer method [13]. Continuous stirring was applied for the mixture of graphite flakes (1 g), NaNO 3 (0.5 g), and H 2 SO 4 (23 mL) under ice bath conditions for 1 h. 3 g KMnO 4 was then slowly added under the stirring and the temperature that kept at below 20 °C for 30 min before moving to the higher temperature water bath 35 °C and continuing the stirring process for 24 h. Subsequently, water (46 mL) was added into the above mixture at 95 °C during a period of 1 h. Finally, the suspension was diluted by adding 130 mL water and 12 mL H 2 O 2 (35 wt%) at room temperature and stirring was continued for another 30 min. The removal of un-exfoliated graphite oxide was carried out by repeated centrifugation-rinsing cycle using 5-10% HCl and water. The final product was then freeze-dried for storage. Photocatalyst was prepared under the solvothermal method where titanium (IV) isopropoxide (TTIP) as the Ti-precursor, thiourea (CH 4 N 2 S) as the sulfurnitrogen-precursor (S, and N), and graphite oxide were used. 
S, and N elements were prepared with the amount of 5 mol% and 10 mol%, respectively for all various of observed graphite oxide concentration. Under the ultrasonic oscillation for 1 h, prepared graphite oxide (0.039 g) was added into ethanol (20 mL) for the formation of graphene oxide. Various concentrations of graphite oxide will be added in the ratio in the range of 0.01-1 wt%. Separately, 15 mL TTIP and 0.186 g thiourea were added into 20 mL ethanol. Both graphene oxide and TTIP-thiourea solution will be further mixed and supplemented by 100 mL deionized water under 30 min continuous stirring. Afterward, an adequate amount of nitric acid (65 wt%) was added to let the pH of the sol remain at 2 and be further stirred until the gel formed at 70 °C for 30 min. The produced gel will be treated in Teflon-lined stainless steel and autoclaved for 12 h at the temperature of 180 °C. Drying process at 70 °C was applied as the final step in photocatalyst preparation before the storage and further application in photocatalytic activity test. Characterization of photocatalyst Various analyses to observe the physical and chemical properties of the synthesized photocatalyst were carried out. Phase transformation analysis of the photocatalyst was carried out using thermogravimetric/differential thermal analysis (TG/DTA) (Pyris Diamond TG/DTA, Perkin Elmer) under controlled temperature and heating rate in the range of 50-850 °C and 10 °C min − 1 , respectively. Determination of the dominant crystalline phase was observed with X-ray powder diffraction spectroscopy (XRD) (Rigaku X-ray Diffraction Model D/MAS IIIV). Detailed information of crystal structures was analyzed by MDI Jade 5.0. XRD analysis was carried out using Cu Kα radiation in the range of 5-80 o with a scan rate of 2 o min − 1 . Estimation of the particle sizes and interplanar distance of the photocatalyst were calculated by the Scherrer equation and Bragg's law. Furthermore, to support the crystallite properties analysis, Raman spectroscopy (Tokyo Instruments, Nanofinder 30) was also applied for the determination of degree ordering and crystallinity. Porous properties and specific surface area analysis were observed using Brunauer-Emmet-Teller (BET) analysis using nitrogen multilayer adsorption method. Scanning electron microscopy (SEM) (JEOL JSM-6700F) was used for the photocatalyst surface properties and transmission electron microscopy (TEM) (JEOL-2100F CS STEM) was used to analyze the crystal structure through its diffraction pattern. The molecular structure of the chemical functional groups of organic compounds in the photocatalyst was observed using Fourier transform infrared spectroscopy (FTIR) (Perkin Elmer Spectrum One B) equipped with a KBr beam splitter. Deeper observation on the photocatalyst band gap was carried out by the UV-Visible spectrometry (Lamda 35, Perkin Elmer) equipped with an integrating sphere. Analysis was carried out at room temperature and the standard atmospheric air pressure with the wavelengths range of 250-700 nm was set. Furthermore, an X-ray photoelectron spectroscopy (XPS) (ULVAC-PHI 500 Versa Probe ESCA) with the Al Kα-radiation (1486 eV) as the source was used for the observation of the binding energy analysis with the calibration reference of C 1 s at 285 eV. 
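As a side note on the XRD analysis mentioned above, the Scherrer equation, D = Kλ/(β cos θ), and Bragg's law, nλ = 2d sin θ, are simple enough to evaluate directly. The short sketch below is only an illustration of these two standard relations with Cu Kα radiation; the peak width used is an assumed placeholder, not a value measured in this work.

```python
import math

CU_KALPHA_NM = 0.15406  # Cu K-alpha wavelength (nm), as used for the XRD scans

def scherrer_size(fwhm_deg, two_theta_deg, shape_factor=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)); beta is the peak FWHM in radians."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return shape_factor * CU_KALPHA_NM / (beta * math.cos(theta))

def bragg_spacing(two_theta_deg, order=1):
    """Interplanar spacing d = n*lambda / (2*sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * CU_KALPHA_NM / (2.0 * math.sin(theta))

# Example: anatase (101) reflection near 2-theta = 25.3 deg, assumed FWHM of 0.8 deg
print(f"crystallite size D = {scherrer_size(0.8, 25.3):.1f} nm")  # about 10 nm
print(f"d-spacing        d = {bragg_spacing(25.3):.3f} nm")       # about 0.352 nm
```

For these assumed inputs the crystallite size comes out near 10 nm, which is of the same order as the 10-15 nm particle sizes quoted later from the microscopy analysis.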
Synthesized photocatalyst for the degradation of toluene The photocatalytic degradation study was carried out in a plug flow reactor system at a temperature of 25 °C, 60% relative humidity (RH), 30 s residence time, and 2.0 ppm initial toluene concentration as the targeted pollutant. Toluene was supplied in a simulated gas system that consists of a mixer, cylinders, and a syringe pump, with air (O2:N2 = 21:79) for the purging process. The synthesized photocatalyst (0.5 g) was mixed with 10 mL anhydrous alcohol under continuous stirring for 24 h before being coated on the inner surface of a pyrex glass reaction tube with a length of 15.5 cm, an interior diameter of 7.5 cm, and a thickness of 0.5 cm. The coated pyrex tube was oven-dried at a temperature of 20 °C and installed in the plug flow reactor system. In order to create a firm reactor setup and prevent leakage or interference from other light sources, the reactor was placed in a stainless-steel structure. Vaporized toluene was supplied and photocatalytically degraded by the synthesized photocatalyst using a commercial fluorescent lamp (FL-10D, 10 W, wavelength range of 360-700 nm with a peak at λ = 436 nm and a light intensity of 2.5 mW cm−2) as the light source in the middle of the pyrex tube. The light intensity was measured using a fluorescence spectrometer (Jasco FP-6200). The concentration of toluene was monitored online using a gas chromatography-flame ionization detector during the photo-degradation process. The photocatalyst with the best toluene degradation performance was further applied in photocatalytic activity tests under various environmental conditions, including initial toluene concentration (1.0-4.0 ppm), temperature (25-45 °C), residence time (5-30 s), and RH (5-80%). Due to the detection limitation of the FTIR instrument, a molecular analyzer (AGM 4000, JSD131108-1) was applied for the CO2 measurement during the photocatalytic performance experiments, with a detection limit of 1%. In the kinetics study, the Langmuir-Hinshelwood kinetic models (Table 4) were applied for the determination of rate constants and adsorption equilibrium constants, followed by fitting the experimental data using Polymath 6.10 software. The values from the best fitted model were applied in the determination of the reaction rate constant under the effect of temperature using the Arrhenius equation, k = k′ exp(−Ea/(R T)), where k′ is the temperature-independent rate constant (mol cm−3 s−1), Ea is the activation energy (kJ mol−1), R is the gas constant (kJ mol−1 K−1), and T is the temperature (K). Moreover, under the assumptions that (i) the temperature-dependent Langmuir adsorption constant describes monolayer adsorption on a homogeneous surface and (ii) the rate constant exhibits a temperature dependence that follows the Arrhenius law, the temperature-dependent adsorption constant can be determined as K = (K′/√T) exp(−ΔH/(R T)), where K is the adsorption equilibrium constant (cm3 mol−1), K′ is the temperature-independent adsorption equilibrium constant (K1/2 cm3 mol−1), and ΔH is the enthalpy change of the reactants (kJ mol−1).

Photocatalyst properties The analysis of crystallite properties using XRD is presented in Fig. 1a and Table 1 along with supportive data in Fig. S2. A wide peak at around 25.5° indicates the presence of rGO. Moreover, the production of rGO via the solvothermal method was also indicated by the disappearance of the (0 0 2) plane at 11.1° [14]. In addition,
the overlap characteristic peak (002 plane) of rGO at 25 o with 101 plane reflection of bare TiO 2 and rGO/TiO 2 at around the same 2θ value creates the almost coincident diffraction patterns [15]. Moreover, no sulfur phase or S containing compounds in the sample with S element in XRD pattern was observed due to the small amount of doping. The particle sizes of S, N, and graphene doped TiO 2 vary with various doping amounts. However, the lattice parameters a and c remain constant. It indicates that photocatalytic properties of the nanoparticles and charge balance in the anatase lattice are not affected by the dopant presence [16]. Figure 1b displays the FTIR spectra results with the peak appearance at 1650 cm − 1 due to the C=C stretch of alkenes. The presence of TiO 2 can be indicated by the slope between 500 and 1000 cm − 1 due to a Ti-O-Ti vibration. The chemical bond between rGO and TiO 2 nanoparticles created a peak around 793 cm − 1 due to the vibration of Ti-O-C bond, thus, large range from the vibration of Ti-O-Ti and Ti-O-C are hardly distinguished [15]. The stretching mode of S=O and stretching vibration of S-O formed the observed peaks at 1257 and 1049 cm − 1 , respectively [17]. However, the presence of bands from the oxygen-functional groups of graphene oxide in the photocatalyst spectra indicates the incomplete reduction of graphite oxide under the solvothermal method [18]. The Raman spectra of the synthesized photocatalyst was displayed in Fig. 1c. In the Raman spectra, the D band at 1350 cm − 1 is assigned to edge or in-plane sp 3 defects and disordered carbon. On the other hand, the in-plane vibration of ordered sp 2 -bonded carbon atoms was presented as the G band at 1600 cm − 1 [19]. Moreover, compared to graphite oxide, the elevation of I D / I G intensity ratio was detected in the reduced graphene oxide sample due to functional groups removal which led to the structural change [20]. Furthermore, the overall photocatalyst presents a similar characteristic of anatase structured TiO 2 with the Raman peak. The peaks at 144, 398, 515, and 633 cm − 1 can be assigned as E g1 , B 1g , A 1g , and E g2 , respectively which were presented by the external vibration of the anatase structure. These peaks are also applied as the indicator of anatase phase formation in the photocatalyst [19]. Moreover, according to the supportive Raman result in Table 1, the value of I D /I G increases when rGO content increases. Table 1 also presents the porous structure of synthesized photocatalyst using BET analysis. The pore sizes of all photocatalysts are in the range of 6-7 nm, within the range of the mesoporous (2-50 nm). In addition, all photocatalysts can be attributed to the type IV curve and are tended to hysteresis loops [21]. Compared to bare TiO 2 , the BET surface area of the different weight percent of rGO/S 0.05 N 0.1 TiO 2 samples significantly decreases with increasing rGO content start from the smallest introduction of rGO content (0.01 wt%). Moreover, reduction of surface area is detected from 162 to 143 m 2 g − 1 with the addition of 0.1 to 1.0 rGO wt%. However, compared to the other rGO addition, low surface area of the 0.01wt%rGO/ S 0.05 N 0.1 TiO 2 is detected. Thus, the dominancy of TiO 2 component affecting surface area and porosity is indicated by the BET results [22]. The surface properties of synthesized photocatalyst were analyzed using SEM and TEM as displayed in Fig. 2. 
TiO 2 composites are dispersed on the surface of rGO and some TiO 2 composites enter into the interlayers of rGO (supported in Fig. S1), this structure supports the efficient electron collection through rGO sheets during the synthesis process [23]. Moreover, the calculation results from Scherrer's equation is in accordance with the quasispherical shape-like morphology in TiO 2 and S doped TiO 2 nanoparticles with an average size of 10-15 nm [24]. Increasing rGO content in the photocatalyst forms a larger aggregate. XPS instrument was applied to observe the chemical and electronic states of the elements and chemical bonding on the surface of photocatalysts as shown in Fig. 3. The spin-orbital splitting photoelectrons in the Ti 4+ valence state caused the peaks centered at 458.4 and 464.0 eV corresponding to Ti 2p 3/2 and Ti 2p 1/2 , respectively in the Ti 2p XPS spectra for TiO 2 . Furthermore, the XPS core level analysis of C 1 s confirmed the Ti-C bond [25]. In the C 1 s spectra, the bonds of C=C, C-C, C-O and -COOH were identified with the signals at 284.5, 285.1, 286 and 288.6 eV, respectively [26]. The O 1 s corelevel peaks can be observed at 529.7 eV (Ti-O-Ti/ Ti-O-C), 530.2 eV (C=O), 531.5 eV (C-O) and 532.7 eV (O-H), while the carbon materials has oxygen-containing species at 531.6 eV (−C(=O)-). The Ti-S and C-S bonds are indicated with peaks at 168.3 and 169.3 eV in the S 2p spectra. Moreover, the doping of nitrogen atoms in the anatase lattice which leads to the replacement of a small portion of oxygen atoms through solvothermal process was indicated by the peak at 400.2 eV in the N 1 s spectra. Hence, the presence of co-doped N and S on the lattices of rGO/TiO 2 photocatalysts is clearly confirmed by the XPS results [27]. The TG/DTA curves for the synthesized photocatalysts was presented in Fig. 4. In the range of 180 to 380 °C, about 9.0% of weight is lost from TiO 2 due to the organic compound decomposition. The conversion of amorphous precursor into the anatase phase occurred as the temperature increased from 425 to 500 °C. No observed weight loss above 500 °C along with the disappearance of TGA, DTA curves with the higher temperature was observed and indicated the initial formation of oxide and the crystal change. The weight loss in the sample of S 0.05 N 0.1 TiO 2 occurs at a temperature range of 50 to 140 °C due to the vaporization of adsorbed/absorbed H 2 O and organics. The removal of strong bonding of water of surface hydroxyl groups causes the 10% weight loss in the range of 150-450 °C in the second region. Moreover, the mass loss due to the S element oxidation was detected in the temperature of 450-800 °C and followed by the stable value of the remaining weight [28]. However, the photocatalyst with rGO has a higher weight loss than that without rGO in temperature ranges of 150-250 and 600-730 °C. The decomposition of remaining organic compounds formed during the synthesis and partial oxygen-containing functional groups in the rGO causes the weight loss, between 200 and 450 °C. Furthermore, the oxidation of carbon scaffold of the rGO was described by the weight loss in the range of 450-650 °C [29]. Table 2 for the direct and indirect band-gap. Compared to bare TiO 2 , a noticeable shift of absorption edge was demonstrated by the S, N-doped TiO 2 samples [15]. The active photocatalytic activity of rGO/S 0.05 N 0.1 TiO 2 composites might occur under visible irradiation with the incident wavelengths in the range 200-800 nm [30]. 
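The band-gap values collected in Table 2 are the kind of quantity normally extracted from diffuse-reflectance spectra through the Kubelka-Munk transform and a Tauc plot. The following sketch shows the generic procedure only; the spectrum is a synthetic placeholder with an absorption edge near 410 nm, the linear-region selection is deliberately crude, and none of it reproduces the authors' actual analysis.

```python
import numpy as np

def kubelka_munk(reflectance):
    """Kubelka-Munk function F(R) = (1 - R)^2 / (2R) for diffuse reflectance R in (0, 1]."""
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

def tauc_band_gap(wavelength_nm, reflectance, exponent=2.0):
    """Estimate an optical band gap (eV) by linear extrapolation of (F(R)*E)^exponent.

    exponent = 2 is the usual choice for direct allowed transitions and 0.5 for
    indirect ones; conventions vary, so treat the result as illustrative only.
    """
    energy = 1239.84 / np.asarray(wavelength_nm, dtype=float)  # E (eV) = hc / lambda
    y = (kubelka_munk(reflectance) * energy) ** exponent
    order = np.argsort(energy)
    e, y = energy[order], y[order]
    # crude pick of the "linear" region: points around the steepest rise
    i = int(np.argmax(np.gradient(y, e)))
    sel = slice(max(i - 3, 0), min(i + 4, len(e)))
    slope, intercept = np.polyfit(e[sel], y[sel], 1)
    return -intercept / slope  # energy where the fitted line crosses zero

# Synthetic reflectance spectrum with an absorption edge near 410 nm (~3 eV)
wl = np.linspace(300, 700, 200)
refl = 0.05 + 0.90 / (1.0 + np.exp((410.0 - wl) / 10.0))
print(f"estimated band gap ~ {tauc_band_gap(wl, refl):.2f} eV")
```

In practice the linear region is usually chosen by inspection of the measured Tauc plot rather than by the automatic heuristic used here.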
Estimation of the band gap energies was calculated by reflectance spectra conversion to absorption Kubelka-Munk units as shown in Table 2 and reveal the reduction of energy gap under the higher rGO content. This phenomenon occurred due to the Ti-O-C bonds of rGO/TiO 2 nanocomposites between TiO 2 nanoparticles and rGO nanosheets [15]. Visible light-driven photocatalytic degradation of toluene Toluene in the initial concentration of 2 ppm was used as the pollutant to be degraded by various synthesized photocatalyst under the RH of 60% at the temperature of 25 °C. Figure 6 and Table 3 present the toluene conversion and reaction rate during photocatalytic degradation. The results indicate that 0.1wt%rGO/S 0.05 N 0.1 TiO 2 has the best photocatalytic activity among all rGO doping ratios. It is found that co-doped nitrogen and sulfur (N, S) would increase the conversion due to an increase in the absorption of visible light [31]. The presence of additional rGO content promotes the creation of •OH radicals and the interference in •O 2 − radicals which further lead to the improvement of photocatalytic activity [31]. However, adding excessive or less rGO may increase collision opportunities between electrons and holes which causes faster electron-hole pairs recombination [32]. According to the results, 0.1wt%rGO/S 0.05 N 0.1 TiO 2 will be further implemented for the study of influencing parameters and kinetic analysis. Photocatalytic activity under various environmental conditions Photocatalyst with the composition of 0.1wt%rGO/ S 0.05 N 0.1 TiO 2 was chosen for the parameters test under various initial toluene concentrations (1-4 ppm), temperature (25-45 °C), RH (0-80%), and retention time (5-30 s). The results of toluene conversion under various parameters are displayed in Fig. 7 along with the data in Table 3. The toluene degradation efficiency was reduced under the supply of higher initial pollutant concentration as seen in Fig. 7a and b. The relationship between pollutant concentration and reaction rate may follow the Langmuir-Hinshelwood model [33], and considering principles of catalytic reactions, at low pollutant concentration, the reaction rate increases with pollutant concentration until it reaches a region where the reaction rate becomes independent of concentration. However, the deposition of refractory reaction intermediates on photocatalyst surface, the loss of active sites, and the dramatic reduction of reaction rate were triggered by the application of high pollutant concentration [34]. The improvement of reaction rate occurred at higher VOC concentration, but the reduction of removal efficiency and mineralization also appeared [34]. Moreover, higher RH value reduced the photocatalytic activity for toluene degradation. The water vapor content in the gaseous effluent created competitive adsorption with toluene molecules on the photocatalyst active sites [35]. However, compared to 0% RH, the higher conversion was detected under the condition of 1% RH which indicated the induction of photocatalytic activity by water vapor due to the hydroxyl radical formation [34]. Under the higher temperature, toluene photocatalytic degradation was also increased due to the exothermic and equilibrium reaction in elementary steps with enhanced the overall reaction rate [36]. Besides the influence on reaction kinetics, higher operating temperature also affects the adsorption of the gas-phase compounds and lowers the amount of adsorbed pollutants on the surface. 
Thus, the mass transfer might limit the process and cause the lower reaction rate [33]. According to previous study, the optimum temperature was found in the range of 40-50 °C. Under low temperature, products desorption will occur due to the slower reaction than the degradation on the surface or the adsorption of reactants. Nevertheless, the higher temperature also becomes a limitation in the toluene adsorption process on the photocatalyst surface [37]. Figure 7g and h display the conversion and reaction rate of the photocatalyst under various retention times. These results indicate the important role of mass transfer and the limitation on oxidation rate due to the direct effect of flow rates on the retention time. The reduction of pollutant's photocatalytic decomposition occurred under higher flow rates and the shorter retention time [38]. The results show that the conversion of toluene increases with an increase in retention time. Reduction of VOC molecules residence time due to the higher airflow rate ignites the lower adsorption and conversion of the pollutant [34]. However, the reaction rate decreases with an increase in retention time. Therefore, the reduction of residence time raises the importance of adsorption capacity during the reaction [39]. Photocatalytic kinetics analysis In this study, L-H models 1-7 (Table 4) were used to simulate data generated from the kinetic experimental set along with the simulation results after the fitting process with polymath software. The simulation results of model 4 are best suited to this study. Under the implementation of higher temperature, the elevation of rate constant k of model 4 and reduction of adsorption constant K w were detected. This phenomenon might occur due to the induction of higher temperature on the species desorption from the surface of photocatalyst. Therefore, the final apparent reaction rate was impacted by the relationship of photocatalytic degradation to both reaction and adsorption constant [40]. Moreover, when the value of K w is higher, the pollutant of toluene is more competitive with water. Table 5 presents the reaction rate constant and adsorption equilibrium constant from the calculation using Arrhenius equation using the values from model 4. The value of reaction rate constant (k') in this study is 3.15 × 10 − 7 mol cm − 3 s − 1 . Furthermore, the adsorption equilibrium constant for toluene (K' A ) and water (K' W ) are 7.46 × 10 7 and 3.52 × 10 6 K 0.5 cm 3 mol − 1 , respectively. The dependency of photocatalytic degradation rate on the temperature was represented by the activation energy value (10.3 kJ mol − 1 ) and created the possibility of surface adsorption−desorption phenomena [41]. Moreover, in this study, the enthalpy value of physisorbed toluene and water are − 5.3 and − 4.5 kJ mol − 1 , respectively. These results were supported by the 3D surface mesh diagram as shown in Fig. 8 which presented the well fitted data with the model 4. Mechanism of toluene photocatalytic degradation Generated byproduct analysis during the photocatalytic degradation of toluene under 0 and 60% RH using 0.1wt%rGO/S 0.05 N 0.1 TiO 2 for 8 h is shown in Fig. 9 along with the mineralization efficiency curve. The C-H stretching vibration of aromatic ring created the bands at 3076 and 3037 cm − 1 . On the other hand, the presence of symmetric and asymmetric C-H stretching vibrations of methyl groups form the bands at 2937 and 2881 cm − 1 , respectively. 
The bands in the range of 1000-1260 cm−1 correspond to the C-O stretching vibration. In addition, the vibration of the aromatic ring is associated with the bands at 1609 and 1496 cm−1 [42]. Upon irradiation, the two bands at 2360 and 2338 cm−1 corresponding to CO2 increase markedly. However, some intermediate products also form in the course of the photocatalytic reaction under visible-light irradiation. The stretching vibration of aldehydes forms the bands at 1685 and 1671 cm−1, which also indicates the generation of benzaldehyde [43]. Furthermore, the bands located at 1541 and 1508 cm−1 were formed by the stretching vibration (C=O) of carbonyl compounds in benzaldehyde. For benzoic acid, the C=C stretching vibration was associated with the peaks centered at 1653 and 1636 cm−1, while the peaks at 1558 and 1521 cm−1 were indicators of the asymmetric stretching vibration modes of the carboxylate group (COO−). Moreover, the characteristic peaks of benzyl alcohol were indicated by the bands at 1473 and 1457 cm−1 [31]. At 0% RH, the C-H group, CO2, and C-O bands were detected; however, at 60% RH these peaks could not be found. It is suggested that toluene may not be converted completely to CO2 at high RH, and the toluene conversion decreases with an increase of RH. The water present under higher humidity conditions competes with toluene for adsorption on the photocatalyst surface. Therefore, partially oxidized toluene species (such as benzaldehyde and benzoic acid) were widely observed under higher humidity conditions and remained on the photocatalyst surface. As reported by Li et al. [44], the generated intermediate products are capable of forming complexes with the photocatalyst surface and may lead to deactivation. However, under the dry condition, the early conversion of toluene to CO2 was detected and the deactivation of the photocatalyst was prevented. For the analysis of the mineralization ratio, the experiments were carried out at a retention time of 30 s, an inlet toluene concentration of 1 ppm, RH of 60%, and an ambient temperature of 25 °C. It can be found that the conversion and mineralization efficiency gradually increase with decomposition time until a steady state occurs, which indicates that toluene is oxidized into CO2 and H2O [45].

TABLE 4 The calculation results for Langmuir-Hinshelwood models 1-7. Note: k = reaction rate constant (mol cm−3 s−1); KA = toluene adsorption equilibrium constant (cm3 mol−1); KW = water vapor adsorption equilibrium constant (cm3 mol−1); r = reaction rate (mol cm−3 s−1); CA = toluene concentration (mol cm−3); CW = water concentration (mol cm−3).

Nevertheless, the FTIR instrument could not detect the low concentration of CO2 functional groups due to the incomplete conversion; therefore, the CO2 content was measured by the CO2 molecular analyzer. According to the results of the FTIR and mineralization analyses, along with the previous study of Sleiman et al. [45], the predicted photodegradation mechanism is illustrated in Fig. 10. The reaction of O2 or H2O with the photogenerated electrons and holes generates reactive oxygen species (•OH and •O2−), which take part in the photocatalytic reaction. Thus, the photocatalytic oxidation of toluene to benzaldehyde occurs at the beginning of the reaction. This result is also supported by the FTIR spectra that display the stretching vibration (C=O) of carbonyl compounds in benzaldehyde.
Under longer irradiation times, further conversion of benzaldehyde into benzoic acid was achieved, as evidenced by the asymmetric stretching vibration modes of the carboxylate group (COO−) of benzoic acid in the FTIR results. The final conversion of toluene into CO2 and H2O reduced its toxicity [46]. Based on the FTIR analysis of the generated byproducts (Fig. 9), RH controls two competing reaction pathways, which are related to the different active species that affect the adsorption mode of toluene on the photocatalyst surface. An electron transfer process from toluene to TiO2 initiates the formation of a benzyl radical in the absence of water vapor. The benzyl-peroxyl radical generated from the reaction of the benzyl radical with O2 is further thermally decomposed on the surface. Furthermore, the formation of an aromatic bridged peroxo intermediate was related to the reaction of the aromatic radical cation with O2. Further conversion of benzaldehyde to benzoic acid, followed by decomposition on the TiO2 surface, elevates the benzene and CO2 content. The reaction is completed by a sequence of oxidation reactions by holes, oxygen, and, to a lesser extent, •OH radicals, which lead to the final product (CO2). Moreover, the water generated and accumulated during the reaction can also adsorb competitively with the contaminant molecules and further reduce the photocatalytic performance [47].

Conclusions The TiO2 photocatalyst was modified by the addition of rGO and the non-metal elements S and N. The TiO2 particles attached to the rGO surface and interposed between its layers, which promoted the elevation of the specific surface area. The improvement in photocatalytic activity was achieved by the formation of chemical defects and bonding that introduce oxygen-containing functional groups.
v3-fos-license
2022-07-28T06:18:21.116Z
2022-07-26T00:00:00.000
251101986
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": null, "oa_url": null, "pdf_hash": "632ff367e0ee1d1bbd683e694d992c47f77df6ec", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:74", "s2fieldsofstudy": [ "Medicine" ], "sha1": "642eda39a4246beec3e51c1e0ce9a985cf68ed72", "year": 2022 }
pes2o/s2orc
Prediction of Compartment Syndrome after Protobothrops mucrosquamatus Snakebite by Diastolic Retrograde Arterial Flow: A Case Report Post-snakebite compartment syndrome (PSCS) is an uncommon but dangerous condition. Compartment syndrome-like symptoms after snakebite by Protobothrops mucrosquamatus (P. mucrosquamatus) are not effective in guiding fasciotomy. Objective evaluation of intracompartmental pressure measurements in patients with suspected PSCS is recommended. However, there is a lack of consensus regarding PSCS and indications for surgical intervention, including the threshold value of chamber pressure. In addition, intracompartmental pressure measurements may not be readily available in all emergency service settings. Measuring intracompartmental pressure in all snakebite patients for early diagnosis of PSCS is impractical. Therefore, identifying risk factors, continuous real-time monitoring tools, and predictive factors for PSCS are important. Sonography has proved useful in identifying the location and extension of edema after a snakebite. In this study, we attempted to use point-of-care ultrasound to manage PSCS in real-time. Here, we describe a rare case of snakebite from P. mucrosquamatus. PSCS was considered as diastolic retrograde arterial flow (DRAF) was noted in the affected limb with a cobblestone-like appearance in the subcutaneous area, indicating that the target artery was compressed. The DRAF sign requires physicians to aggressively administer antivenom to salvage the limb. The patient was administered 31 vials of P. mucrosquamatus antivenom, and fasciotomy was not performed. DRAF is an early sign of the prediction of PSCS. Introduction Snakebite venom may cause both systemic and locoregional effects. The use of antivenom can effectively control the systemic effects of venom and prevent fatal complications. However, locoregional effects after snakebites may occur more frequently than systemic events. Tissue necrosis, edema, blistering, and bruising are commonly present in the development of compartment syndrome, which is rare but may cause permanent physical deformities owing to residual sequelae after a snakebite. Post-snakebite compartment syndrome (PSCS) is an emerging condition characterized by an increase in intracompartmental pressure leading to subsequent neurovascular compromise and tissue necrosis. Typical signs and symptoms of PSCS are "6P", including pain, paresthesia, pallor, paralysis, poikilothermia, and pulselessness. When PSCS is diagnosed, an early fasciotomy is necessary to prevent permanent muscle and nerve damage. However, PSCS is often hard to predict and diagnose after envenomation that may occur several hours or even days after a snakebite. Therefore, an effective and easy tool for detection and timely intervention may improve clinical outcomes. Herein, we present a case of P. mucrosquamatus bite, and try the use of a point-of-care ultrasound to facilitate clinical decisions for PSCS. Case Presentation A 61-year-old male with an unremarkable medical history was admitted to the emergency department with a snakebite injury on his left wrist one hour after the snakebite. The patient confirmed that the snake was Protobothrops mucrosquamatus (P. mucrosquamatus). Physical examination revealed a temperature of 36.2 • C, blood pressure of 225/100 mmHg, heart rate of 94 beats/min, and Glasgow Coma Scale score of E4V5M6. His skin had two puncture wounds and two laceration wounds on the left palm with severe swelling, local erythema, and ecchymosis. 
There was no numbness, weakness, slurred speech, or dyspnea. Detailed laboratory analyses are shown in Table 1. Only D-dimer levels were elevated. No significant coagulopathy or thrombocytopenia was noted. Four vials of P. mucrosquamatus antivenom were administered in a timely manner in the emergency department to prevent compartment syndrome. After one hour, a tiny vesicle formed at the left wrist and progressed to multiple bullae (Figure 1). Although the capillary refilling time of the distal fingers was less than two seconds, sonographic vascular peripheral arterial assessment of the left distal radial artery showed diastolic retrograde arterial flow (DRAF), reflecting the compression of the target artery (Figure 2). The results of peripheral vascular Doppler ultrasound were incompatible with capillary refilling time. However, a high risk for compartment syndrome was suspected. Therefore, P. mucrosquamatus antivenom was aggressively administered every two hours and the patient was admitted for close observation. Multiple bullae were aspirated to eliminate the venom depots. In the ward, distal radial arterial Doppler pulse and capillary refilling time were regularly checked to detect compartment syndrome. The left-hand swelling and ecchymosis condition improved with antivenom administration (Figure 3). A follow-up laboratory analysis revealed significantly elevated D-dimer levels. Finally, he was administered 31 vials of P. mucrosquamatus antivenom. Discussion PSCS is an uncommon but dangerous condition. The incidence rate of surgical intervention after Crotalinae snakebites is high [1]. Crotaline snakebites, including Protobothrops mucrosquamatus and Trimeresurus stejnegeri, are common venomous snakebites that account for the majority of envenoming events in Taiwan. Envenoming by Crotalinae snakebites leads to severe tissue edema and pain in the affected extremities, mimicking compartment syndrome [2,3]. In a study [3], 54.6% (53/97) of patients were admitted for observation during the acute period.
Of these patients, 75.4% (40/53) were followed up for 48-72 h and 18.8% (10/53) were followed up for 96-120 h due to compartment syndrome-like symptoms. Only three patients finally received fasciotomy. All of these patients responded to 20% mannitol and antivenom therapy, which relieved the symptoms and decreased compartment pressure. A descriptive study of snakebite patients in northern Taiwan from 2009 to 2016 showed that 50% (63/125) of patients were bitten by P. mucrosquamatus, and the surgical rate was high, up to 23.8% [1]. There are several reasons to explain why compartment syndrome-like symptoms commonly occur in P. mucrosquamatus snakebites. First, extensive swelling and persistent ecchymosis are common after snakebite by P. mucrosquamatus, even with antivenom therapy, and mimic compartment syndrome. Second, snake venom containing phospholipase A2 and metalloproteinase may directly destroy soft tissue by breaking down muscle fibers and type IV collagen in capillaries. Compartment syndrome-like symptoms after snakebite by P. mucrosquamatus can therefore be caused directly by snake venom without an increase in compartment pressure [4][5][6]. Finally, the host response to snake venom may contribute to local tissue damage, including the formation of neutrophil extracellular traps [7]. Therefore, compartment syndrome-like symptoms after snakebites caused by P. mucrosquamatus are not effective in guiding fasciotomy. The ischemic symptoms and signs of PSCS usually occur in the late period and should not be relied on for the early diagnosis of PSCS. Therefore, identifying risk factors, continuous real-time monitoring tools, and predictive factors is important. A classification of snakebite wounds was proposed to determine PSCS in the absence of compartment pressure measurement devices [3]. In groups 0 and 1, there was only a bite trace and local pain involving less than 2 cm of the extremity diameter. In group 2, there were mild systemic symptoms and involvement of 2-4 cm of the extremity diameter. Patients in group 3 had involvement of more than 4 cm of the extremity diameter and exhibited edema, coldness, and pulselessness. Patients with higher-grade snakebite wounds, especially those in group 3, should be monitored for a minimum of 24 h, with intracompartmental pressure monitoring for PSCS if necessary. Laboratory analysis was also an effective tool to predict PSCS [8,9]. In a study [9], 6.6% (9/136) of patients developed PSCS, and the authors found a significant increase in the white blood cell (WBC) count, segmented neutrophils, aspartate aminotransferase (AST) level, and alanine aminotransferase level in the PSCS group. In multivariate analysis, the WBC count (cut-off value: 11,650/µL, with a sensitivity of 66.7% and a specificity of 83.6%) and AST (cut-off value: 33.5 U/L, with a sensitivity of 85.7% and a specificity of 78.9%) were risk factors for PSCS. These markers reflect inflammatory or cytokine reactions during the host response [10]. Acute hemolysis and severe necrosis of the skeletal muscles in the PSCS population increased AST levels. In symptomatic snakebite patients, elevated WBC or AST levels should raise concern for the development of PSCS. Our case did not show leukocytosis or elevated AST on day one; the elevated AST levels were observed after four days. Objective evaluation of patients with suspected PSCS, such as intracompartmental pressure measurement, is recommended. However, there is a lack of consensus regarding PSCS and indications for surgical intervention, including the threshold value of chamber pressure [11,12].
Some studies advocate close follow-up and fasciotomy if there is clinical suspicion. In a population prone to trauma-induced compartment syndrome, early fasciotomy is the most efficacious. Late fasciotomy has resulted in similar rates of limb salvage but a higher risk of infection [11]. Fırat et al. [13] suggested that early fasciotomy in the PSCS population also resulted in better recovery than late fasciotomy. In early fasciotomy, local edema and toxic symptoms are rapidly diminished by enhancing the circulation of the extremities and clearing the toxins. Conversely, some studies have suggested that fasciotomy should be postponed because it may cause complications. Few patients require fasciotomy after snakebite due to PSCS, and earlier fasciotomy increases morbidity [14][15][16]. Most compartment syndrome-like symptoms regress with adequate antivenom therapy. If intracompartmental pressure progressively increases to over 55 mm Hg, fasciotomy should be considered [3]. In addition, intracompartmental pressure measurements may not be readily available in all emergency service settings. Measuring intracompartmental pressure in all snakebite patients for early diagnosis of PSCS is impractical. In our case, the plastic surgeon used compartment syndrome-like symptoms to assess PSCS, and surgeons were concerned that fasciotomy might impair the function of the affected extremity. Therefore, we attempted to use an objective method via point-of-care ultrasound (POCUS) to manage PSCS. Several studies have highlighted the benefit of sonography in identifying the location and extension of edema after snakebites [2,17,18]. Wood et al. [17] measured the dimensions of the subcutaneous tissues and the deep muscle compartment in the affected limb compared with the unaffected limb to calculate an expansion coefficient for early detection of PSCS. However, it is difficult to establish absolute criteria for the diagnosis of PSCS based on morphometric analysis [19]. Ho et al. [2] proposed a theory that increased intracompartmental pressure in the PSCS population compromised blood circulation, leading to tissue hypoxia and nerve injury. When the development of PSCS influences vascular compliance and resistance, diastolic retrograde arterial flow (DRAF) can be observed in the affected artery, reflecting a restriction of the compartment space [20,21]. This phenomenon has been demonstrated by McLoughlin et al. [21]. The authors found that greater percentages of DRAF were detected in healthy volunteers under pressure cuffs inflated to 40 mm Hg, to the diastolic blood pressure, and to the mean arterial pressure. The appearance of DRAF therefore seems to be an ideal tool for the serial evaluation of PSCS development. However, Ho et al. [2] analyzed 17 snakebite patients bitten by P. mucrosquamatus; sonography showed subcutaneous edema in all patients, but DRAF was not detected in any of them. In our patient, DRAF was noted in the affected limb with a cobblestone-like appearance in the subcutaneous area, indicating that the target artery was compressed. Early signs of PSCS were detected using POCUS, which led the emergency physicians to change their therapeutic strategies. The venom of P. mucrosquamatus includes 61 distinct proteins belonging to 19 families, including snake venom metalloproteinase (SVMP; 29.4%), C-type lectin (CLEC; 21.1%), snake venom serine protease (SVSP; 17.6%), and phospholipase A2 (PLA2; 15.9%) [22]. The key treatment for PSCS from P.
mucrosquamatus bites is the administration of horse-derived antivenom produced by the Centers for Disease Control, R.O.C. (Taiwan). The antivenom of P. mucrosquamatus is a bivalent F(ab')2 fragment antivenom with a neutralization effect of over 1000 Tanaka units per vial [23][24][25]. Based on the current consensus on P. mucrosquamatus bite management, approximately 1-4 vials of antivenom are recommended for patients with P. mucrosquamatus snakebites [25][26][27][28]. In our case, four vials of antivenom were administered because of the progression of compartment syndrome-like symptoms 8 h after the snakebite. However, POCUS showed DRAF, suggesting early PSCS. P. mucrosquamatus antivenom was then administered every two hours, and the interval was extended to four hours as the PSCS improved. The patient was administered 31 vials of P. mucrosquamatus antivenom for limb salvage. Although fasciotomy was not performed, we believe that DRAF was an early sign reflecting compression of the target artery. DRAF was detected even with local edema in the subcutaneous area. Severe subcutaneous edema may also cause DRAF if the snakebite wound is in the distal limb. Thus, DRAF is an early sign reflecting compression of the target artery but is not specific for PSCS. Partial compression of the target artery may also cause DRAF, as demonstrated by McLoughlin et al. [21]. Nevertheless, DRAF is an early and effective sign to alert emergency physicians to consider PSCS in patients bitten by P. mucrosquamatus. Conclusions Herein, we present a case of P. mucrosquamatus bite. After the snakebite, PSCS was considered because of DRAF and the cobblestone-like appearance in the affected limb, reflecting compression of the target artery. The appearance of DRAF should prompt physicians to administer antivenom aggressively for limb salvage. The patient was administered a total of 31 vials of P. mucrosquamatus antivenom, and fasciotomy was not performed. We believe that DRAF is an early sign of PSCS. Conflicts of Interest: The authors declare no conflict of interest.
v3-fos-license
2020-06-11T09:11:06.905Z
2020-05-01T00:00:00.000
219880407
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://iiste.org/Journals/index.php/JEDS/article/download/52780/54535", "pdf_hash": "1c28943db54fc51e8263b4c29b2dc1ace9390fec", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:75", "s2fieldsofstudy": [ "Economics" ], "sha1": "7300ca66f569828af44e360faffe71fa1e613903", "year": 2020 }
pes2o/s2orc
External Reserves and Selected Key Macroeconomic Variables in Nigeria: An Empirical Analysis (2000-2018) 1 The paper determines empirically the interactive influence of external reserves and selected key macroeconomic variables in Nigeria using an autoregressive distributive lag (ARDL) model, cointegration and error correction model anticipated by Pesaran, Shin and Smith (2001) with quarterly data between 2000 and 2018 sourced from Central Bank statistics portal on data warehouse pro platform at https://cbnstatistics.datawarehousepro.com. The paper applied the Augmented Dickey-Fuller (ADF) unit root in testing variables stationarity. The Cumulative sum (CUSUM) as well as the Cumulative sum of square (CUSUMSQ) display some recursive outstanding schemes of the external reserve function that remain within the 5% critical positions, and therefore gave an indication of steady external reserve purpose for Nigeria during the study period. The key variables trade openness that captured the total imports and exports by way of proportion of gross domestic product (GDP), exchange rate, direct investment, portfolio investment, oil price, consumer price index, interest rates have correct signs and the ARDL regression analysis indicates that the descriptive variables elucidate and accounted for 99% disparities in external reserves model. The bounds cointegration test exhibited that the variables are cointegrated. The paper demonstrated several empirical supports for the theoretical implications. Precisely, the log of direct investment, portfolio investment, trade openness and interest rate have positive effect, statistically significant and contributes to the external reserves position in Nigeria on the short- run. noted that developing countries accumulate reserves to allow Central banks intercede in souks to regulate rates of consumer price index and exchange, to enable the country borrow from developed economies as well as guide against uncertainties of external investment flows; economic and social costs which could emanate from insignificant earnings on capital, impairment as a result of foreign exchange devaluation and foreign earnings from capital as well as public disbursements which might be funded using reserves. Reserve accumulation help in stabilizing the economy, this is when the central banks mediate in markets to impact the exchange and inflation rates. The Central Bank of Nigeria (CBN) holds reserves to deal with exchange rate volatility, as a shockwave stabilizer in times of financial tremors, settlement of international trade, and means of holding Sovereign Wealth Fund (SWF). These reasons for holding reserves gives rise to the question of this study; "does accumulation of reserve or its depletion have impact on the economy of Nigeria?". Olokoyo et al (2009) scrutinized collaborative influence of reserve on some macroeconomic variables between the period 1970 and 2007 in Nigeria and concluded that accumulation of reserve is not very productive in Nigeria. From the works by some scholars on the investigation of External reserves and some macroeconomic variables in Nigeria and the international community in general, it was found that there have been both positive and negative impacts of accumulation of external reserves on the nation's economy. Hence, the significance of this research is to add to the existing knowledge and offer recommendations to the government on how to ease the risk and negative effects of accumulating reserves in the country. 
The aim of this research is an extension of the work of Olokoyo et. al (2009) by examining the movements and collaboration between international foreign exchange in Nigeria and relationship of reserves on various macroeconomic variables like GDP, foreign capital flow, trade openness, oil price and exchange, inflation, and interest rates during the period 2000 Q1 and 2018 Q4 using the ARDL. The inclusion of key macroeconomic variables and the time period considered is quite unique. The paper is arranged in five parts viz first part is the introduction, part two handles literature review; part three treats research methodology and part four presents data, results and discussion. Part five is the concluding remarks of the paper. Literature Review 2.1 Theoretical Fiction There have been several discussions and opinions on external reserve by some scholars; the reason for reserve accumulation, adequacy, determinants, etc. In assessing the adequacy of reserve, IMF (2011) enlightened on the significance of possessing suitable reserve as fragment of a nation's defense in contradiction of shocks. Even though thorough inclusive strategy frameworks were perhaps the most significant purpose, liquidity cushions help flat consumption throughout the crises, and empowered some nations to accomplish huge diminutions without undergoing an expensive disaster. Frenkel and Jovanovic (1981) in their work viewed reserves as a buffer stock to permit stochastic variations in outward transactions. Since modification costs could be sustained each time reserves attain a lower bound, it will be ideal to maintain a level of reserves that can hold the unpredictability of external dealings and evade such modification. In a way, foreign reserves indicate the strength of a nation, Greenspan (1999) argued that economies with feebler currencies ostensibly grasp international reserves because they notice the insurance worth of such reserves could be equivalent of their cost in actual possessions. According to him, reserves, similar to all other monetary asset, have worth but include cost. Consequently, the alternative to build reserves, and in what amounts, is continuously a tough cost-benefit compromise. Generally, reserves are held for exchange rate targeting, credit worthiness, foreign exchange market stability, dealings cushioning and disaster such as natural tragedy (Archer andHalliday, 1998 andHumphries, 1990). However, there are risks involved in the accumulation of reserve. When the reserves are held in excess, instead of preventing economic disaster, they might weaken the global economic scheme in the elongated term. Also, they might increase the accumulation of international disparity in countries with current account surplus (Steiner, 2010). According to Fischer (2001), "Reserves matter because they are the key determinant of a country's ability to avoid economic and financial crisis. This is true of all countries, but especially the emerging markets that are open to volatile international capital flows. The availability of capital flow to offset current account shocks reduce the amount of reserves a country need. But access to private capital is often uncertain, and inflows are subject to rapid reversals. As was seen in the financial crisis of the late 1990s and the recent global financial crisis, countries with robust foreign reserves, by and large, did better in withstanding the contagion than those with smaller foreign reserves". 
IMF (2003) established real per capita GDP, proportion of importations to GDP, instability of the exchange rate, population level as possible factors that determines external reserves holding and at the same time are stochastically connected with reserve whereas opportunity cost and capital account susceptibility were not. According to Heller and Khan (1978), reserve accumulation can be influenced by the floating exchange rate regime. They identified it as a perilous feature that impacts the level of reserves adequacy. In their submission, they observed that for industrialized nations their reserves adequacy moves downwards, although most unindustrialized nations the level tends to increase. They posit that more elastic exchange rate managements can accept shockwaves to the economy, and therefore need negligible liquidity buffers. Nevertheless, the necessity for reserves may rise with exchange rate pliability to temper exchange rate movements if investments are unpredictable. Conversely, there have been controversies between the monetarist and Keynesian on the accumulation of reserve, and based on this there are some theories applied. According to the monetarist, accumulation of reserve is because of the surplus claim for the local currency and the development of the international trade. In respect to Keynesian, accretion of external reserve will increase the current account hence it will absolutely influence cumulative input in the short term; this could disrupt the nominal exchange rate. Additional concept is the adaptability method that observes outcome of an appreciation or devaluation of the exchange rate on reserve movements in an economy. The method stipulates in a descending modification of exchange rates, a country undergoing a balance of payment imbalance must increase exports and decrease imports and consequently accrue additional reserves. In some unindustrialized nations of Africa and East Asia, there has been massive accumulation of external reserve regardless of economic implication on macro economy (Umeora, 2013). Reserves accretion can carry a huge opportunity cost for African nations and indicate economical sponsoring of the shortfalls for reserve capital nations. The accumulation is accompanied by appreciation of national currencies which in a way destabilizes export competitiveness and hinders energies meant at diversification of exports. Reserves can distort development through the exports and asset networks. It can negatively affect local asset and employment generation and limit growth towards attaining national expansion objectives. (Elhiraika andNdikumana, 2007, Kevin G. et al 2009). Elhiraika and Ndikumana, (2007) in their contribution, examined the motivations, foundations, and influence of the reserves accretion, with concentration on the effect of main macroeconomic variables such as public and private asset, the exchange rate, and inflation. Using Kao and Pedroni (1996) co-integration tests that notify requirement of the models in order to circumvent false regressions, they came up with the result that reserves speculation could not vindicated the revenues to investments contemplations, assumed the low international interest rates and the high rates of earnings to local investments in sub-sahara African nations. A main outcome of the experimental scrutiny is reserves accretion had been attended by increase in the value of domestic moneys. 
However, their evidence suggested that central banks were efficacious in encompassing the process of reducing taxes and or increasing government expenditures impacts of international reserves inflows, especially by curtailing its influence on money supply. The approach emanates at a price as it avoids nations benefit from exports growth and international assets influxes to arouse local asset. Empirical Literature Usman and Ibrahim (2010) utilize a blend of Ordinary Least Squares (OLS) and Vector Error Correction (VEC) approaches to examine the influence of variation in foreign exchange positions of Nigeria scheduled local asset, inflation and exchange rates, it was noticed that variation in foreign exchange impacts FDI and exchange rates, but had no impact on local asset and consumer price index. The outcomes recommended wider reserve administration approaches could take full advantage of the improvements in crude oil export returns using more capitals for domestic asset enhancement. In some Asian countries, as researched by Lin and Wang (2005), using the method originated by Kydland and Prescott (1977), the relationship between the external reserve and the inflation showed that when the external reserves increases, the consumer price index (CPI) will be increasing even though exchange rate impact was robust. However, CPI will decrease once the economic shock impact could be prevailing while the heaviness attached on yield steadiness is minimal. Marc-Andre Gosselin & Nicholas Parent (2005) by means of OLS technique noted that reserve accretion by monetary authorities in developing Asia estimated reserve-demand technique using eight panels of Asian developing-market economies and came up with an opinion that since past relationships remain to be true, a slow-pace stride of reserve accretion is possible. Their outcome suggests undesirable risks for the USD. Though, the considerable asset impairments that Asian monetary authorities would sustain if they were to radically alter their holding strategy alleviate the risks of a speedy depreciation of the USD activated by Asian monetary authorities. Umeora (2013) examined the relationship between External Reserves Accumulation, Exchange Rate and GDP using a simple linear regression and found that there was an encouraging and substantial relationship amongst Exchange Rate, GDP and External Reserve accretion. External reserve has substantial positive impact on the measure of exchange rates of domestic currency (Naira) to USD at diversification of exports. Kruskovic and Maricic (2015) studied the pragmatic investigation of the influence of external reserves on economic development in developing economies and came up with the result that exchange rate devaluation happens due to reserve accretion that is not provoking since it is non-tenacious shockwave different from abrupt devaluation of the exchange rate which happens due to sustaining overestimated exchange rate in the long run which could result to currency catastrophe. Accretion to external reserve will not result to inflation provided the proportion of external reserve accretion will not surpass economic development rate. Their finding suggests that the upsurge in external reserves results to the expansion of the GDP, while in the contrary direction, interconnection has not been established.
Fukuda and Kon (2010) in their work explored the possible long-run impacts of the accumulation of external reserve on macroeconomic variables in emerging nations by analyzing a less complex undeveloped economy investigation wherever improved foreign exchange reserves diminish expenses of liquidity hazard. They used unbalance panel data of 135 countries as obtainable in Penn World Table. The different nations indication showed a rise in external reserves consequently increases international loan remaining as well as curtails loan maturity. The outcomes suggested augmented external reserves could result to decrease consumption and improve asset and financial development. External reserves were found to be positively interrelated with investment rate and GDP growth. The positive influence on economic development, nevertheless, vanishes when the influence is being controlled through asset i.e., with the use of asset proportion as a regressor, reserve buildup immaterial for economic development. Olokoyo et.al (2009) observed collaborative effect of international reserve on some macroeconomic variables using the ARDL and found a long-term connection amongst external reserves with some carefully chosen macroeconomic variables such as, level of trade trustworthiness, foreign investment inflow and inflation. The outcomes collected from the cointegration show minimum of two co-integrating models. Their observation established reasons that the extent of foreign exchange in Nigeria includes GDP, extent of trade openness, foreign asset influx and inflation. The extent of GDP and trade openness has positive influences on international reserves, supportive of theoretical base of external reserves. However, the extent of investment inflow and consumer price index had an adverse effect on external reserves, with a tendency to oppose accretion of reserves program of the CBN. Alabi et. al (2017) investigated the causality between foreign exchange and financial growth in Nigeria, if it exist any influence of accruing reserve on the economic development and came up with the conclusion that, accumulating reserves does affect growth positively and there was an elongated term symmetry connection amongst the variables. Their findings revealed there was un-maneuvering interconnection from real GDP to the level of external reserves accrued; this means that economic growth causes an increase in external reserve. Osuji and Ebiringa (2012) adopted a VAR model for multivariate analysis of external reserve on the macroeconomic indices to control extended term affiliation and to assess the consequence of macroeconomic volatility on the external reserve management between the years . The outcome of the VAR model revealed that external reserves is substantial in the contemporary period which appears to meet in the preceding years. Their result showed that the type, design and level of assets commodities and non-assets commodities influences external reserve management. Kevin et al (2009) employed the empirical framework of Fukuda and Kon (2008) that designed a simple open economy model where accretion to external reserve increased the cost of liquidity risk to normalize the long run effects of foreign exchange on macroeconomic indices for a collection of minor Caribbean open emerging economies. They used a balanced panel data co-integration test and the Panel Dynamic OLS (PDOLS) estimation methods to examine the presence of exclusive co-integration relationship and derive the long run estimates. 
From their findings, it was revealed that external reserves have a significant adverse effect on consumption and loan maturity and an encouraging influence on exports and economic development. Also, there was no substantial relationship amongst asset and reserves. Furthermore, the outcome of international reserves on economic development was revealed to hinge on the level of import cover as well as the exports and consumption on the management of the interest rates. Research Methodology This study adopts the Autoregressive Distributive Lag Model (ARDL) in examining the causal relationship amongst the extent of external reserve buildup and some macroeconomic variables, and how these macroeconomic variables impact on the level of external reserve. The macroeconomic variables included are GDP, exchange rate, consumer price index, oil price, trade openness, level of capital inflow (direct investment and portfolio investment) and interest rate. The ARDL modeling method was chosen because it is statistically suitable for examining the co-integration relationship in small samples, and it allows different optimal lags of the variables. It is desirable when handling series which are integrated of diverse order, I(0), I(1) or a combination of both, and it is robust when there is a single long-run relationship amongst the fundamental variables in a small sample size. In this study, series are integrated of both I(1) and I(0), hence there is need for the bounds co-integration test. Gregory and Hansen Cointegration Test The Gregory and Hansen (1996) residual-based test for cointegration was used to test for a structural break in the cointegrating relationship between the indicators of consideration. This method is preferable to the Engle and Granger (1987) method, which tends to fail to reject the null hypothesis of no cointegration when there is a cointegrating relationship that has changed through the sample period. The Gregory and Hansen test can be viewed as an extended version of the Engle and Granger method. Model Specification and Estimation Method The comprehensive ARDL (p, q) model is stated as: Yt = δ0 + Σ(i=1 to p) αi Yt-i + Σ(j=0 to q) βj Xt-j + εt, where Yt is a vector and the variables in Xt are allowed to be purely I(0) or I(1) or cointegrated; α and β are coefficients; δ0 is the constant; j = 1, …, k; p, q are optimum lag orders; εt is a vector of error terms (unobservable zero-mean white noise). For the dependent variable, p lags are used, while the exogenous variables use q lags. Having considered the advantages of the ARDL modeling method, some of which are conveying reliable approximations of long-run parameters which are asymptotically standard notwithstanding the order of integration and being able to differentiate amongst dependent and explanatory variables, this article relates external reserves to the macroeconomic variables as: EXTR = f(GDP, TRADO, EXR, DI, PI, CPI, OP, INTR) (1) Equation (1) can be written explicitly as follows: EXTRt = β0 + β1GDPt + β2TRADOt + β3EXRt + β4DIt + β5PIt + β6CPIt + β7OPt + β8INTRt + εt (2) where EXTR is international reserves, GDP is gross domestic product; TRADO denotes the extent of trade openness designated as the summation of import and export as a proportion of GDP, EXR is the exchange rate (Naira to USD), DI is direct investment, PI is portfolio investment, which make up the investment flows to the economy, OP represents the oil price, CPI is the consumer price index, INTR is the interest rate, β1-8 are parameters, and εt is the error term. The variables were reported in log format for ease of comparability.
The economic concept formed the basis for the a priori expectations that βi > 0, for i = 1, 2, 3, 4, 5, 6, 7 and 8. The study conducts a unit root test to examine the stationarity of the series; if the series are integrated of different orders, that is, I(1) and I(0), then it is advisable to assess the long-run relationship by means of the bounds cointegration test proposed by Pesaran et al. (2001), with the following ARDL specification (based on equation (2)): ΔLEXTRt = β0 + β1LEXTRt-1 + β2LGDPt-1 + β3LTRADOt-1 + β4LEXRt-1 + β5LDIt-1 + β6LPIt-1 + β7LCPIt-1 + β8LOPt-1 + β9LINTRt-1 + Σ(i=1 to p) δ1i ΔLEXTRt-i + Σ(i=0 to q) δ2i ΔLGDPt-i + … + Σ(i=0 to q) δ9i ΔLINTRt-i + εt (3) where β1-9 are the long-run multipliers and the δ terms capture the short-run dynamics. It follows then that our test for the level relationship has the following null and alternative hypotheses: H0: β1 = β2 = β3 = β4 = β5 = β6 = β7 = β8 = β9 = 0 (there is no co-integration) against H1: β1 ≠ β2 ≠ β3 ≠ β4 ≠ β5 ≠ β6 ≠ β7 ≠ β8 ≠ β9 ≠ 0 (co-integration exists). The importance of conducting the bounds examination is to investigate whether there is a long-run relationship between the variables in the above equation. The decision rule is that the null hypothesis is rejected provided the computed F-statistic is larger than the critical value for the upper bound I(1) at the 5% level of significance. The long-run and short-run models are then estimated when the null hypothesis of no cointegration is rejected. The estimation of the long run shows there is equilibrium, and a short-run ECM is used to gauge the short-run dynamic outcome. This is achieved by means of the ARDL restricted ECM, such that both long- and short-run coefficient estimates are obtained. Following Pesaran and Pesaran (1997), short-run dynamics are vital in examining the stability of the long-run relationship stated in equation (2), and the following ECM is therefore estimated: ΔLEXTRt = β0 + Σ(i=1 to p) δ1i ΔLEXTRt-i + Σ(i=0 to q) δ2i ΔLGDPt-i + … + Σ(i=0 to q) δ9i ΔLINTRt-i + λECTt-1 + εt (4) where Δ is the first difference operator, λ is the error correction coefficient, ECT is the error correction term, and the other variables are as elucidated previously. The initial portion of the model shows the long-run dynamics of the equation. The study estimated the ARDL model using a maximum of four (4) lags (imax = 4) and the Akaike information criterion (AIC). The ARDL procedure requires that the model pass the following diagnostic tests: the linearity test, to ascertain whether the model is linear and correctly specified; the serial correlation test, to ascertain the validity or otherwise of the estimates; the heteroskedasticity test, to check whether the variance of the error term is constant, as non-constant variance could arise from incorrect data transformation, incorrect functional form, or incorrect specification of the regression model; the normality test, employed to examine whether the residuals are normally distributed; as well as the CUSUM and CUSUMSQ tests of recursive residuals. Data Description and Source The quarterly data spanning 2000:Q1 - 2018:Q4 was utilised. The data was sourced from the Central Bank of Nigeria (CBN) statistics portal on the data warehouse pro platform and is also accessible at https://cbnstatistics.datawarehousepro.com. The various tests and estimation of the regression were carried out with the use of E-views version 11. Estimation Techniques The cointegration and error-correction modeling technique was used for this study.
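As an illustration of the pre-testing and lag-selection steps just described (ADF unit-root screening followed by ARDL estimation with a maximum of four lags under the AIC), a minimal sketch is given below. This is not the authors' E-views 11 workflow: it assumes the quarterly series are already in logs in a pandas DataFrame with hypothetical column names matching the model (LEXTR, LGDP, LTRADO, LEXR, LDI, LPI, LCPI, LOP, LINTR), and it relies on the adfuller and ardl_select_order routines available in statsmodels 0.13 or later.

```python
# Illustrative sketch only (not the paper's E-views workflow).
# Assumes a quarterly DataFrame of logged series with hypothetical column names.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import ardl_select_order

df = pd.read_csv("reserves_quarterly.csv", index_col="date", parse_dates=True)

def integration_order(series, signif=0.05):
    """Classify a series as I(0) or I(1) with the ADF test at the given level."""
    if adfuller(series.dropna(), autolag="AIC")[1] < signif:    # [1] is the p-value
        return "I(0)"
    if adfuller(series.diff().dropna(), autolag="AIC")[1] < signif:
        return "I(1)"
    return "I(2) or higher"  # bounds testing would not be valid in this case

for col in df.columns:
    print(col, integration_order(df[col]))

# ARDL lag selection with a maximum of 4 lags and the AIC (imax = 4 in the paper)
selection = ardl_select_order(
    df["LEXTR"],                  # dependent variable
    4,                            # maximum lags of the dependent variable
    df.drop(columns="LEXTR"),     # regressors
    4,                            # maximum lags of each regressor
    trend="c",
    ic="aic",
)
results = selection.model.fit()
print(selection.model.ardl_order)  # e.g. (3, 4, 3, ...) as reported in the paper
print(results.summary())
```

If any series turned out to be I(2), the Pesaran et al. (2001) critical bounds used in the next step would no longer be valid, which is why the unit-root screening precedes the bounds test.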
In order to evaluate the cointegration and error-correction, the researchers examined the order of integration, cointegration, as well as the error correction model, as discussed above. Unit-Root test In empirical research on time series data, there always exists the problem of non-stationarity, which makes conventional econometric estimators like ordinary least squares unsuitable. To overcome this problem of unit roots, the first step was to study the stationarity of the time-series data. Many tests have been developed to test for stationarity; these include the Dickey-Fuller (DF), Augmented Dickey-Fuller (ADF), Phillips-Perron (PP), and Sargan-Bhargava Durbin-Watson (SBDW) tests. Of all of these, the ADF test is generally viewed as the most effective tool for determining the order of integration. Accordingly, the ADF test is used in this study. The null hypothesis here is that the variables of the model are non-stationary, while the alternative proposition gives evidence of stationarity. If the variables examined turn out to contain unit roots, it implies they are non-stationary. Stationarity can however be achieved by first differencing of the levels provided the series are integrated of order one, i.e. I(1). Cointegration Test This recognizes circumstances where two or more non-stationary time series are tied together in a way that they cannot depart from equilibrium in the long run. The tests are used to identify the sensitivity of two variables to the same mean value over a stated period of time. We can consider such series as describing a long-run equilibrium relationship, as the difference between them is stationary (Hall and Henry, 1989). In a situation where there is no cointegration, it suggests that such variables have no long-run relationship; they can wander arbitrarily far from each other (Dickey et al., 1991). The research used the bounds technique proposed in Pesaran et al. (2001). Error Correction Model (ECM) The ECM is used only when the existence of cointegration has been demonstrated; it involves the creation of an error correction mechanism to capture the dynamic relationship. The purpose of the ECM is to indicate the speed of adjustment from the short-run equilibrium to the long-run equilibrium state. The larger the coefficient of the parameter, the swifter the speed of adjustment of the model from the short run to the long run. Equation (4) is the illustration of an error correction form that permits the inclusion of the long-run information. Data, Results and Discussion Data was obtained from the CBN statistics portal on the data warehouse pro platform and is also accessible at https://cbnstatistics.datawarehousepro.com. All the required conversions were done on the variables in order to cast the model in a stationary, log-linear form. The various tests and estimation of the regression were carried out with the use of E-views version 11. Table 1 presents the descriptive statistics of the key macroeconomic variables under consideration in log form. The average log of external reserves is 10.1556, while that of the exchange rate shows an average of 5.0372. Based on the results of the Jarque-Bera test for normality, we reject the null hypothesis of normality for these two series at the 5% significance level. The Correlation Matrix The correlation matrix measures the extent of association prevailing amongst the variables of consideration.
The matrix determines the strength and direction of the linear association between two variables; the nearer the correlation coefficient is to −1/+1, the stronger and closer to perfectly linear the association amongst the variables. The correlation coefficients of the several variables (external reserves (EXTR), trade openness (TRADO), exchange rate (EXR), direct investment (DI), portfolio investment (PI), oil price (OP), consumer price index (CPI), and interest rate (INTR)) are presented in Table 2 (correlation matrix of the selected key macroeconomic indicators). Table 2 gives the correlation matrix for the selected macroeconomic variables utilised in this research. Exchange rate and consumer price index have a correlation of 92.5%, which is highly significant at the 5% significance level. The correlation coefficients between external reserves and the other key macroeconomic variables are also displayed in Table 2; precisely, the coefficients of external reserves versus direct investment, trade openness and oil price are 59.6%, 72.3% and 76.7%, respectively. These three coefficients are quite high and are all significant at the 5% significance level. Other coefficient levels are clearly shown in Table 2. The Unit Root Test The tests done in this research show that all the variables LEXTR, LEXR, LOP, LDI, LPI, LTRADO, LCPI, and LINTR are integrated of order one (stationary after first differencing), except LGDP, which is stationary at level. This is because the absolute values of the ADF test-statistics are greater than the tabulated ADF critical values of the variables at the 5% level of significance, except for LCPI, which is significant at the 10% level. This information is displayed below. Co-integration Test The bounds co-integration test was conducted because the series were integrated of diverse orders, i.e. I(1) and I(0). From the co-integration test result, it was found that the variables for the equations were co-integrated, as the null hypothesis is rejected given that the computed F-statistic is larger than the critical value for the upper bound I(1) at the 5% level of significance; hence, both the long- and short-run models were estimated. Table 4 presents the outcome of the bounds co-integration test conducted on the series. The diagnostic tests (serial correlation, normality, linearity and heteroskedasticity) carried out were all insignificant. The CUSUM and CUSUM square tests were conducted to check if there exists any problem with the recursive residuals, as the ARDL is sensitive to this, and from the test, it was found that there was no issue with the recursive residuals in terms of the mean, as the plots of CUSUM and CUSUM square for the models were within the 5% critical bounds. This implies the parameters of the model do not suffer any structural break within the scope of the research. The outcome of the tests can be found in Figures 1a and 1b. The CUSUM stability test shows that the external reserve function was stable for the entire period from 2010 to 2018. This is because, at the 5% significance level, the statistic lay completely within the critical bounds. Similarly, the CUSUM squares test also shows that the external reserve model was stable at the 5% critical level. As shown in Table 5 above, the coefficients of the estimated ARDL all met the a priori expectation of a positive sign, indicating a positive impact, though some variables such as LGDP, LEXTR, LEXR and LCPI showed a positive impact in the previous period.
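The bounds-test decision rule stated in the methodology and applied in Table 4 can also be written as a small, self-contained helper, shown below as a hypothetical illustration. Neither the function nor the example numbers come from the paper; in practice the I(0) and I(1) critical bounds are read from the Pesaran, Shin and Smith (2001) tables (or from the estimation software) for the relevant case and number of regressors (k = 8 here).

```python
# Hypothetical helper for the Pesaran et al. (2001) bounds-test decision rule.
def bounds_decision(f_stat: float, lower_i0: float, upper_i1: float) -> str:
    """Compare the computed F-statistic with the I(0)/I(1) critical bounds."""
    if f_stat > upper_i1:
        return "Reject H0: evidence of a long-run (cointegrating) relationship"
    if f_stat < lower_i0:
        return "Do not reject H0: no cointegration"
    return "Inconclusive: F-statistic lies between the I(0) and I(1) bounds"

# Example with placeholder numbers (not the paper's actual F-statistic or bounds):
print(bounds_decision(f_stat=6.2, lower_i0=2.62, upper_i1=3.77))
```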
Estimated Long-run coefficients The co-integration test showed that the series were cointegrated, so we went ahead to estimate the long- and short-run models as specified in equation (4) above. In the second stage of the ARDL procedure, the estimates of the long-run coefficients were computed using the AIC. Table 6 displays the summary outcomes of the estimated long-run coefficients. Error Correction Model The ECM corresponding to the long-run estimates of the chosen ARDL model was evaluated. The estimated ECM in Table 7 has two portions. The first part comprises the estimated coefficients of the short-run dynamics, and the second portion contains the estimate of the ECT, which governs the speed of adjustment with which the short-run dynamics converge to the long-run equilibrium path of the model. The short-run coefficient estimates show the dynamic adjustment of the several variables. The short-run coefficients for all the variables are significant, though some only at the previous period. The coefficient of ECM(-1) is negative and highly significant, signifying that the variables are indeed cointegrated. The estimated value of the coefficient shows that approximately 6.68 per cent of any disequilibrium in external reserves is corrected by the short-run adjustment each quarter. This, however, indicates a very slow speed of adjustment between the variables, meaning that previous errors in the relationship between reserves and the selected macroeconomic variables are adjusted in the current period, but at a sluggish rate. Concluding Remarks This study has examined the movements and empirical connections of reserves with some macroeconomic variables, namely GDP, exchange rate, consumer price index, oil price, trade openness, level of capital inflow (direct investment and portfolio investment) and interest rate in Nigeria. The matters of structural breaks, cointegration and the stability of the external reserve function in Nigeria throughout 2000:Q1 and 2018:Q4 were studied. The Gregory-Hansen test was used to detect probable structural breaks and evaluate the cointegrating model. The approach captures the short-run dynamics together with the stability of the external reserve function. Nigeria's external reserves have been persistently affected by the international oil price and by economic and financial influences. In the short run, when various prices in the economy can change, movements in nominal exchange rates can change relative prices and disrupt external trade flows. The best Autoregressive Distributed Lag estimate of the model was determined as ARDL(3,4,3,3,4,4,2,3,2). The CUSUM and CUSUMSQ functions of Brown et al. (1975) were used to assess whether the external reserve function for Nigeria was stable during the study period. Whenever the plot of the recursive residuals of the estimated external reserve function remains within the boundaries of the two critical lines, there is an indication of parameter stability over the period. In this study, the CUSUM and CUSUMSQ plots show that the recursive residuals for the external reserve function remain within the 5% critical lines, giving an indication of a stable external reserve function for Nigeria. This shows that the external reserve function was stable during the period 2000 to 2018, which therefore suggests that the external reserve function is stable for the period under consideration. The results show that LDI, LCPI, LTRADO and LINTR positively affect the level of reserves in the long run, whereas LOP, LDI, LPI, LTRADO and LINTR have positive, statistically significant effects and contribute to the reserves position in Nigeria in the short run.
It is therefore necessary for policy makers in the country to always monitor the impact of these key macroeconomic variables on external reserves so as to help monetary authority in exchange rate policy and foreign investors in the investment models to adopt in both short and long run.
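As a rough numerical illustration of the error-correction results discussed above (and not part of the original estimation), an adjustment coefficient of about −0.0668 per quarter implies the following half-life for a deviation of reserves from their long-run relationship.

```python
# Back-of-the-envelope half-life implied by an ECM(-1) coefficient of -0.0668.
import math

ecm_coefficient = -0.0668  # share of last quarter's disequilibrium corrected per quarter
half_life_quarters = math.log(0.5) / math.log(1 + ecm_coefficient)
print(f"Half-life: {half_life_quarters:.1f} quarters (~{half_life_quarters / 4:.1f} years)")
# Roughly 10 quarters, i.e. about 2.5 years for half of a shock to dissipate,
# consistent with the paper's reading of a sluggish speed of adjustment.
```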
v3-fos-license
2023-02-08T06:17:49.339Z
2023-02-06T00:00:00.000
256629747
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1111/resp.14465", "pdf_hash": "f549f0e83284e4d9d5d9362178ae81dd3ac27ac1", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:77", "s2fieldsofstudy": [ "Political Science" ], "sha1": "28266f926686edb7fb5927068f918f0eb36c4b79", "year": 2023 }
pes2o/s2orc
The tobacco endgame for the Asia Pacific Nearly 70 years after the British Doctors Study linked smoking to lung cancer, and cardiorespiratory disease, evidence of smoking's health impacts continues to amass. 1 The tobacco epidemic is one of the greatest public health threats, killing over 8 million people annually and costing 5.7% of global health expenditure and 1.8% of global gross domestic product. Almost 40% of this cost is borne by low and middle income countries. 2 Environmental costs are vast. 3 Annually, six trillion cigarette filters become the second most prevalent plastic pollution and the commonest plastic litter on beaches. 3 Global treaties support ending the tobacco epidemic. 4 The 2003 WHO Framework Convention on Tobacco Control (FCTC), the first global health treaty, has been ratified by 182 countries (>90% of global population covered). Tobacco control is integral to The United Nations Sustainable Development Goals. The WHO Global Action Plan for the Prevention and Control of Noncommunicable Diseases (NCD) 2013-2020 targets a 30% reduction in tobacco use by 2025 relative to 2010. Such efforts reduced smoking from 36.3% to 33.5% in men, and from 7.9% to 6.7% in women between 2009 and 2017. 5 However, no country has fully implemented all six FCTC/MPOWER measures. 4 Smoking disproportionately affects people in lower socioeconomic groups, marginalized groups and ethnic minorities. The current paradigm of incremental policy change will see tobacco continue as a leading cause of disease and health inequity for generations. In contrast, Tobacco Endgame strategies aim to rapidly and permanently reduce tobacco use to minimal levels, effectively ending the tobacco epidemic. 6 Multiple endgame strategies exist and most countries rely on a suite of policies. Endgame goals vary; some countries aim to eliminate all nicotine products, including smokeless tobacco and Electronic Nicotine Delivery Systems (e.g., Finland 7 ), others focus on combustible tobacco products (e.g., New Zealand 8 ). Most endgame goals include reaching <5% smoking prevalence between 2025 and 2040. 9 Multiple policies with endgame potential are described and some are now being implemented. McDaniel identified 16 policies in four focus areas: 'product', 'user', 'market/supply' and 'larger institutional structures'. 6 A scoping review found evidence syntheses for eight of these policies, the most researched being very low nicotine content cigarettes, retail restrictions, substitution with non-combustible products and stringent taxation. 9 Half the world's population lives in Asia Pacific, corresponding to two WHO Regions-Western Pacific (WPR-37 countries, 1.9 billion people) and South-East Asia (SEAR-11 countries, 1.97 billion people). These countries are at different stages of the tobacco epidemic. 10 Some are only just beginning to see prevalence start to decline, while others have reached 10% smoking prevalence after decades of declining prevalence. 11 Simultaneously, the region dominates tobacco production; 49% of production in 2020 coming from China and India. Two hundred forty-one million adults in SEAR and 388 million in WPR smoke. Adult smoking prevalence is projected to fall from 47% in 2000 to 25% by 2025 in SEAR, 12 although this will remain the highest prevalence of any region. It is projected to fall from 30% in 2000 to 22% by 2025 in WPR, however this falls short of the NCD 30% reduction target. 11 14.8 million children aged 13-15 years in SEAR, and 5.7 million children in WPR use tobacco.
Smoking prevalence remains low in females, but much higher in males. 10 Low-level tobacco use is seen as a market opportunity by the Tobacco Industry and use is rising among women and girls. In eight SEAR and WPR countries, smoking prevalence in girls now exceeds prevalence in women, and in one WPR country more girls smoke than boys. 4 Sustained tobacco control measures have driven tobacco product diversification, growing markets for alternative products (e.g., ENDS, heated tobacco products and nicotine pouches), flavoured products and increasing concurrent use of multiple products. 13,14 Key points • The tobacco epidemic is one of the greatest public health threats, killing over 8 million people annually. • Despite sustained tobacco control over recent decades, more than 600 million people in the Asia Pacific region smoke. • Tobacco Endgame strategies aim to rapidly and permanently reduce tobacco use to minimal levels, effectively ending the tobacco epidemic. • New Zealand has introduced the first national endgame strategy in the region. SEAR has 81% of the world's smokeless tobacco users (240 million), which is 7 times more prevalent among women than smoked tobacco (11.5% and 1.6%, respectively). Tobacco flavourings are unregulated. WPR has the highest prevalence of menthol cigarette use worldwide (15% in 2020), comprising 21%-29% of the market in Japan, the Philippines and Malaysia and 48% in Singapore. 14 Much pioneering tobacco control work occurred in Asia Pacific, such as the first national smoke-free legislation and advertising bans (Singapore 1970-1971), bans on smokeless product manufacture and sale (Hong Kong 1987) and plain packaging (Australia 2012). 17 Whilst heavily challenged by the tobacco industry, these policies have inspired others, for example, over 30 countries now have plain packaging laws. The challenges and obstacles to tobacco control in Asia Pacific are similar worldwide 17 with industry interference ever present, particularly during the COVID-19 pandemic. 4,17,18 Asia Pacific is also a pioneer in Tobacco Endgame policy. On 1 January 2023, New Zealand's Smokefree Environments and Regulated Products (Smoked Tobacco) Amendment Act came into force. The Act's clear equity focus aims to redress inequities between Māori and non-Māori communities, 8 shifting from individual blame to acknowledging the Tobacco Industry as the source of the problem. It includes a Smokefree Generation law banning tobacco sales to anyone born after 2008; reducing nicotine in cigarettes to very low levels to decrease addictiveness (≤0.8 mg/g tobacco); and reducing access to cigarettes, limiting supply to authorized stores, cutting tobacco outlets by 90%. Companies must also disclose sales, pricing, advertising, sponsorship and ingredients. Funding for health services, campaigns and smoking cessation has increased. This policy package could reduce adult smoking prevalence to 7.3% in 2025 for Māori, and 2.7% for non-Māori (Figure 1). 19 In summary, implementation of FCTC/MPOWER tobacco control interventions significantly reduced smoking over the past two decades in the Asia Pacific region. However, millions still smoke and younger generations remain at risk from traditional and novel products. Recognizing the limitations of tobacco control policies, Tobacco Endgame approaches attempt to hasten the end of the tobacco epidemic via bold policies that more directly address key epidemic drivers: tobacco product addictiveness and widespread availability.
New Zealand has introduced the first national endgame strategy in the region, setting the standard for other countries to follow.
v3-fos-license
2023-06-05T15:00:55.827Z
2023-06-03T00:00:00.000
259071919
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://doi.org/10.1155/2023/2702882", "pdf_hash": "b571f90f35b6edcf85cef6239ba2cf7a634aaac9", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:78", "s2fieldsofstudy": [ "Medicine", "Biology" ], "sha1": "4505a9456b319168e489f85f360bdf9ea7df3fea", "year": 2023 }
pes2o/s2orc
Effects of CAR-T Cell Therapy on Immune Cells and Related Toxic Side Effect Analysis in Patients with Refractory Acute Lymphoblastic Leukemia Objective To observe the effects of chimeric antigen receptor T (CAR-T) cell immunotherapy on immune cells and related toxic side effects in patients with refractory acute lymphoblastic leukemia (ALL). Methods A retrospective study was conducted in 35 patients with refractory ALL. The patients were treated with CAR-T cell therapy in our hospital from January 2020 to January 2021. The efficacy was evaluated at one and three months post treatments. The venous blood of the patients was collected before treatment, 1 month after treatment, and 3 months after treatment. The percentage of regulatory T cells (Treg cells), natural killer (NK) cells, and T lymphocyte subsets (CD3+, CD4+, and CD8+ T cells) was detected by flow cytometry. The ratio of CD4+/CD8+ was calculated. Patient's toxic side effects such as fever, chills, gastrointestinal bleeding, nervous system symptoms, digestive system symptoms, abnormal liver function, and blood coagulation dysfunction were monitored and recorded. The incidence of toxic and side effects was calculated, and the incidence of infection was recorded. Results After one month of CAR-T cell therapy in 35 patients with ALL, the efficacy evaluation showed that complete response (CR) patients accounted for 68.57%, CR with incomplete hematological recovery (CRi) patients accounted for 22.86%, and partial disease (PD) patients accounted for 8.57%, and the total effective rate was 91.43%. In addition, compared with that before treatment, the Treg cell level in CR+CRi patients treated for 1 month and 3 months decreased prominently, and the NK cell level increased dramatically (P < 0.05). Compared with that before treatment, the levels of CD3+, CD4+, and CD4+/CD8+ in patients with CR+CRi in the 1-month and 3-month groups were markedly higher, and the levels of CD4+/CD8+ in the 3-month group were memorably higher than those in the 1-month group (P < 0.05). During CAR-T cell therapy in 35 patients with ALL, fever accounted for 62.86%, chills for 20.00%, gastrointestinal bleeding for 8.57%, nervous system symptoms for 14.29%, digestive system symptoms for 28.57%, abnormal liver function for 11.43%, and coagulation dysfunction for 8.57%. These side effects were all relieved after symptomatic treatment. During the course of CAR-T therapy in 35 patients with ALL, 2 patients had biliary tract infection and 13 patients had lung infection. No correlations were found between the infection and age, gender, CRS grade, usage of glucocorticoids or tocilizumab, and laboratory indicators such as WBC, ANC, PLT, and Hb (P > 0.05). Conclusion CAR-T cell therapy had a good effect on patients with refractory ALL by regulating the immune function of the body via mediating the content of immune cells. CAR-T cell therapy may have therapeutic effect on refractory ALL patients with mild side effects and high safety. Introduction Acute lymphoblastic leukemia (ALL) is a clinically common hematological tumor, accounting for about 20% to 30% of acute leukemia in adults. Clinical manifestations of ALL include the inhibition of bone marrow hematopoietic function and the proliferation and infiltration of leukemia cells, etc. ALL has a high recurrence rate and poor prognosis, which seriously affects the life of patients [1,2]. The incidence rate of ALL is high, accounting for 15% of leukemia and about 35% of acute leukemia. 
At present, the main clinical treatment of ALL is chemotherapy and/or hematopoietic stem cell inhibition therapy. However, the recurrence rate and mortality rate of patients are still at a high level. Previous studies have concluded that chemotherapy has a two-year disease-free survival rate of 39.0% and a two-year overall survival rate of 58.4% for patients with ALL [3,4]. How to improve the prognosis of refractory ALL patients has become the focus of current medical research. Chimeric antigen receptor T (CAR-T) cell is a T cell with the ability to recognize and kill tumors, and key cytokines such as IL-12 could be expressed on the basis of whose organizational structure. Intensifying the activation reaction of T cells has a good effect in the treatment of ALL and can improve the long-term survival rate of patients by enhancing the immune response [5,6]. In recent years, foreign scholars believe that CAR-T cell therapy has shown good efficacy in children with refractory ALL, with tolerable safety, high response rate, and excellent persistence [7]. However, the effects of CAR-T cells on immune cells in patients with refractory ALL have been reported less. In this study, 35 patients with refractory ALL admitted in our hospital during January 2020 to January 2021 were chosen as subjects, aiming to analyze the effects of CAR-T immunotherapy on immune cells and related toxic side effects in patients with refractory ALL. General Materials. Thirty-five patients with refractory ALL during January 2020 to January 2021 were chosen as subjects. The clinical data of the patients were collected and retrospectively analyzed. Inclusion criteria were as follows: (1) all patients were initially diagnosed patients who failed to respond to two standard protocols or ALL patients who recurred within 12 months after consolidation and intensive treatment after CR or who recurred twice or more [8]. (2) The patient's age was between 18 and 65 years old. (3) The patients and their family members were informed and had good compliance and could cooperate with the examination and treatment. All of them signed an informed consent form. Exclusion criteria were as follows: (1) patients with severe cardiovascular and cerebrovascular diseases, (2) patients with nervous system diseases, (3) patients with other malignant tumors, and (4) patient complicated with infection. The subjects included 18 males and 17 females, with an average age of 38:16 ± 6:85 years. The operation of this experiment was approved by the hospital Ethics Association. The experimental process is shown in Figure 1. Methods. To prepare CAR-T cells, 40-60 mL of peripheral blood was collected from experimental subjects, anticoagulation with heparin: T lymphocytes were activated and expanded after isolation and purification, and CAR-T cells were amplified again after specific CAR transfection. CAR-T cells were frozen after quality inspection. CAR-T cells were infused intravenously for the treatment. All patients received chemotherapy about 30 days before CAR-T cell infusion. Appropriate chemotherapy programs were chosen by physicians according to the patient's condition and previous treatment to reduce the tumor load of patients, aiming to prevent the occurrence of cytokine release syndrome (CRS) or reduce the severity of CRS. Detect the recovery of patient's blood routine. 
About 3 days before cell infusion, a fludarabine (Flu) + cyclophosphamide (CY) pretreatment scheme (FC) was given as follows: Flu 25 mg/m² and CY 300 mg/m²/d, injected intravenously for 3 consecutive days. According to an existing literature report from the United States [7], the CAR-T cell dose for reinfusion should be 10^6-10^7 cells/kg body weight. The reinfusion plan was therefore formulated according to the number of T cells collected from each patient, and the infused dose was 10^6-10^7 cells/kg body weight. Outcome Measures 2.3.1. Efficacy Analysis. The efficacy evaluation included complete response (CR), CR with incomplete hematological recovery (CRi), and partial disease (PD). CR represented recovery of bone marrow hematopoiesis, with primitive cells < 5%, no primitive cells in peripheral blood, an absolute platelet count > 100 × 10^9/L, an absolute neutrophil count > 1 × 10^9/L, and no recurrence within 4 weeks. CRi referred to recovery of bone marrow hematopoiesis, with primitive cells < 5% and no primitive cells in peripheral blood, but with an absolute platelet count ≤ 100 × 10^9/L or an absolute neutrophil count ≤ 1 × 10^9/L, and no recurrence within 4 weeks. PD represented a 25% increase of primitive cells in bone marrow or peripheral blood, or the occurrence of extramedullary infiltration. Detection of Serum Indicators. The venous blood of patients was collected before treatment, 1 month after treatment, and 3 months after treatment and stored in heparin anticoagulant tubes. After centrifugation at 1500 r/min, the percentages of Treg cells, NK cells, and T lymphocyte subsets (CD3+, CD4+, and CD8+) were detected by flow cytometry, and the CD4+/CD8+ ratio was calculated. Toxic and Side Effects. The patients' condition changes were closely observed, and toxic and side effects such as fever, chills, gastrointestinal bleeding, nervous system symptoms (dizziness, headache, irritability, aphasia, photophobia, etc.), digestive system symptoms (vomiting, nausea, etc.), abnormal liver function, and blood coagulation dysfunction were monitored and recorded. The incidence of side effects was calculated. Infection. After CAR-T cell infusion, the presence of infection was judged comprehensively according to laboratory indicators, imaging, histopathology, and/or microbiological examination. Infection within 30 days after infusion required intravenous antibiotics or, in the case of severe infection, hospitalization. The age, gender, cytokine release syndrome (CRS) grade (Table 1), use of glucocorticoids or tocilizumab, and laboratory indicators including white blood cell count (WBC), neutrophil count (ANC), platelet count (PLT), and hemoglobin (Hb) were collected. Statistical Analysis. SPSS 20.0 software was used to analyze the experimental data. Measurement data such as Treg cells, NK cells, CD3+, CD4+, and CD4+/CD8+ were expressed as mean ± standard deviation (x̄ ± s). Repeated-measures ANOVA was used for comparison among groups, and groups with statistically significant differences were further compared with the Tukey test. The SNK-q test was used for comparison of multiple sample means between groups. Enumeration data such as curative effect and adverse reactions were expressed as percentages, and the χ² test was used for comparison between groups. P < 0.05 indicated statistical significance.
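The comparisons described in the Statistical Analysis paragraph can be illustrated with a short sketch. This is not the authors' SPSS workflow: the numbers below are hypothetical placeholders, and a Friedman test stands in for the repeated-measures ANOVA because SciPy does not provide the latter.

```python
# Hedged sketch of the statistical comparisons described above (not the authors' SPSS code).
# Example numbers are placeholders; real analyses would use the per-patient measurements.
import numpy as np
from scipy import stats

# Treg percentages for the same CR+CRi patients before, 1 month and 3 months after therapy
treg = np.array([
    [8.1, 6.0, 5.2],
    [7.4, 5.9, 5.0],
    [9.0, 6.8, 5.6],
])  # rows = patients, columns = time points (hypothetical values)

# Repeated-measures comparison approximated here by a Friedman test,
# since SciPy has no built-in repeated-measures ANOVA.
chi2_rm, p_rm = stats.friedmanchisquare(treg[:, 0], treg[:, 1], treg[:, 2])
print(f"repeated-measures comparison: chi2={chi2_rm:.2f}, p={p_rm:.3f}")

# Chi-square test for a categorical rate between two groups,
# e.g. infection vs. no infection split by CRS grade (counts are hypothetical).
table = np.array([[5, 10],
                  [10, 10]])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square test: chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```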
Results 3.1. Analysis of Curative Effect after Treatment. After one month of CAR-T cell therapy in 35 patients with ALL, the efficacy evaluation showed that CR patients accounted for 68.57% (24 cases), CRi patients accounted for 22.86% (8 cases), and PD patients accounted for 8.57% (3 cases); the total effective rate was 91.43% (Figure 2). Comparison of Treg Cell Level before and after Treatment. The patients were grouped according to the time after treatment, and the Treg cell level was detected. Compared with that before treatment, the Treg cell level in CR+CRi patients treated for 1 month and 3 months decreased prominently, and the NK cell level increased dramatically (P < 0.05, Table 2). Comparison of T Lymphocyte Subsets before and after Treatment. Compared with those before treatment, the levels of CD3+, CD4+, and CD4+/CD8+ in patients with CR+CRi in the 1-month and 3-month groups were markedly higher, and the CD4+/CD8+ ratio in the 3-month group was significantly higher than that in the 1-month group (P < 0.05, Table 3). Analysis of Side Effects after Treatment. During CAR-T cell therapy in the 35 patients with ALL, fever accounted for 62.86%, chills for 20.00%, gastrointestinal bleeding for 8.57%, nervous system symptoms for 14.29%, digestive system symptoms for 28.57%, abnormal liver function for 11.43%, and coagulation dysfunction for 8.57%. These side effects were all relieved after symptomatic treatment (Table 4). Discussion ALL is a common malignant hematological disease, accounting for 80% of acute leukemia in children. At present, the clinical treatment effect of ALL is good, with a CR rate as high as 70%-90%, but some patients have refractory disease or relapse easily. About 30% of ALL patients relapse after conventional induction remission, and the cure rate of refractory or relapsed disease is low, only about 5% [9,10]. Therefore, how to improve the cure rate and quality of life of patients with refractory ALL has become the focus of current research. In CAR-T cell therapy, T cells are genetically engineered to carry a single-chain variable fragment (scFv) that specifically recognizes an extracellular antigen, which gives them the ability to target and kill tumor cells. After in vitro expansion, CAR-T cells are transfused back into patients to improve their immune function and kill abnormal leukemia cells in the body, which can help improve the quality of life and prolong the lives of patients with ALL [11,12]. As a new immunotherapy, CAR-T cell therapy provides a treatment option for hematologic malignancies, and some scholars are also studying its application value in solid tumors. Berdeja et al. [13] concluded that CAR-T cell therapy shows a high-quality response, and that a single cilta-cel infusion at a target dose of 0.75 × 10^6 CAR-positive viable T cells per kilogram can produce an early, deep, and durable response in patients with multiple myeloma who have undergone extensive pretreatment, with a manageable safety profile. In this study, after one month of CAR-T cell therapy, the efficacy evaluation showed a total effective rate of 91.43%. During treatment, fever accounted for 62.86%, chills for 20.00%, gastrointestinal bleeding for 8.57%, nervous system symptoms for 14.29%, digestive system symptoms for 28.57%, abnormal liver function for 11.43%, and coagulation dysfunction for 8.57%. These side effects were all relieved after symptomatic treatment. These results suggested that CAR-T cells were effective in the treatment of refractory ALL without serious side effects. After symptomatic treatment, the side effects were relieved, indicating high safety, which was similar to the research results of Li and Chen [14].
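For illustration, the response criteria defined in the Methods (CR, CRi, PD) amount to a simple decision rule that can be encoded directly; the field names and the fallback category below are assumptions of this sketch, not part of the study protocol.

```python
# Hedged sketch of the response criteria quoted in the Methods; field names are assumed,
# and cases not covered by the stated rules are returned as "unclassified".
from dataclasses import dataclass

@dataclass
class Assessment:
    marrow_blasts_pct: float        # % primitive (blast) cells in bone marrow
    peripheral_blasts: bool         # blasts detectable in peripheral blood
    platelets_e9_per_L: float       # absolute platelet count, x10^9/L
    anc_e9_per_L: float             # absolute neutrophil count, x10^9/L
    relapse_within_4_weeks: bool
    blast_increase_pct: float       # relative increase of blasts vs. baseline, %
    extramedullary_infiltration: bool

def classify_response(a: Assessment) -> str:
    if a.blast_increase_pct >= 25 or a.extramedullary_infiltration:
        return "PD"
    marrow_recovered = (a.marrow_blasts_pct < 5
                        and not a.peripheral_blasts
                        and not a.relapse_within_4_weeks)
    if marrow_recovered and a.platelets_e9_per_L > 100 and a.anc_e9_per_L > 1:
        return "CR"
    if marrow_recovered:
        return "CRi"
    return "unclassified"

print(classify_response(Assessment(3.0, False, 120, 1.5, False, 0, False)))  # -> CR
```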
Studies have proved that the occurrence and development of ALL are closely related to the changes of immune function [15,16]. The change of cellular immunity is more closely related to ALL. Among them, Treg cells, NK cells, and T lymphocyte subsets can mediate the change of cellular immune function. Treg cells can regulate the body's peripheral tolerance monitoring and autoimmune response, so leukemia cells will be regarded as normal cells, and the effect of immunotherapy will be weakened by inhibiting specific antitumor T cells [17,18]. Niu et al. [19] believed that the increase of Treg cell level indicated the failure of ALL treatment or recurrence. NK cells have cytotoxic function and immune regulation function and are the first line of defense against infection and tumor. After a period of effective treatment for ALL, the NK cell level gradually recovers and the immune function of the body is enhanced [20]. T lymphocyte subsets mainly include CD3+, CD4+, and CD8+. Among them, the level of CD3+ reflects the number of T lymphocytes in the body, CD4+ determines the change of immune cell function in the body, and CD8+ is an immunosuppressive factor. Therefore, the ratio of CD4+/CD8+ reflects the changes of cellular immune function [21]. In this study, compared with that before treatment, the Treg cell level in CR+CRi patients treated for 1 month and 3 months decreased prominently, and the NK cell level increased dramatically. Compared with that before treatment, the levels of CD3+, CD4+, and CD4+/CD8+ in patients with CR+CRi in the 1-month and 3-month groups were markedly higher, and the levels of CD4+/CD8+ in the 3-month group were memorably higher than those in the 1-month group. It was suggested that CAR-T cell therapy improved the cellular immune function of the body by mediating the changes of Treg cells, NK cells, and T lymphocyte subsets, which was helpful to control the patient's condition. During the course of CAR-T therapy in 35 patients with ALL, 2 patients had biliary tract infection and 13 patients had lung infection. In addition, no correlations were found between the infection and the age, gender, CRS grade, usage of glucocorticoids or tocilizumab, and laboratory indicators such as WBC, ANC, PLT, and Hb. No related factors of infection were found in the present study. However, some reports indicated that [22,23] the more severe CRS after CAR-T cell treatment was, the greater the possibility of infection was. After infusion of CAR-T cells, it is sometimes difficult to distinguish infection and CRS reaction. Thus, the prevention, diagnosis, and treatment of infection after CAR-T treatment need further research. Studies have also found that early stage after CAR-T infusion is affected by pretreatment by depigmentation, and later stage is related to cytokine-mediated cytopenias. Some patients may experience prolonged cytopenia and require blood transfusion or growth factor support, during which the patient's immune function is significantly reduced and the infection rate increases [24,25]. The absence of infectious factors that was identified in this study may be due to the small number of cases included in this study and the single-centre retrospective study, which might have data bias. In general, CAR-T cell therapy had a good effect on patients with refractory ALL by regulating the immune function of the body via mediating the content of immune cells. This therapy may have a therapeutic effect on refractory ALL patients with mild side effects and high safety. 
However, due to the limitation of research time and sample size in this experiment, the long-term prognosis of patients was not followed up and there was no control group. In the future, the mechanism of Treg cells influencing the curative effect of CAR-T cells will be further clarified through in vivo and in vitro experiments. We need to verify whether the therapeutic effect of CAR-T cells can be improved and the recurrence can be reduced by intervening Treg cells. Therefore, the sample size and research time will be expanded for indepth exploration in our following study. Data Availability The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest The authors declare that they have no competing interests.
v3-fos-license
2014-10-01T00:00:00.000Z
2011-03-16T00:00:00.000
7778473
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmccancer.biomedcentral.com/track/pdf/10.1186/1471-2407-11-97", "pdf_hash": "ac94ae9a3577d5a8772f960c4b14b7d024d3b501", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:79", "s2fieldsofstudy": [ "Medicine" ], "sha1": "fcbb837c6d575246fafc486136c4574d5ccc4599", "year": 2011 }
pes2o/s2orc
CXCL12 expression by healthy and malignant ovarian epithelial cells Background CXCL12 has been widely reported to play a biologically relevant role in tumor growth and spread. In epithelial ovarian cancer (EOC), CXCL12 enhances tumor angiogenesis and contributes to the immunosuppressive network. However, its prognostic significance remains unclear. We thus compared CXCL12 status in healthy and malignant ovaries, to assess its prognostic value. Methods Immunohistochemistry was used to analyze CXCL12 expression in the reproductive tracts, including the ovaries and fallopian tubes, of healthy women, in benign and borderline epithelial tumors, and in a series of 183 tumor specimens from patients with advanced primary EOC enrolled in a multicenter prospective clinical trial of paclitaxel/carboplatin/gemcitabine-based chemotherapy (GINECO study). Univariate COX model analysis was performed to assess the prognostic value of clinical and biological variables. Kaplan-Meier methods were used to generate progression-free and overall survival curves. Results Epithelial cells from the surface of the ovary and the fallopian tubes stained positive for CXCL12, whereas the follicles within the ovary did not. Epithelial cells in benign, borderline and malignant tumors also expressed CXCL12. In EOC specimens, CXCL12 immunoreactivity was observed mostly in epithelial tumor cells. The intensity of the signal obtained ranged from strong in 86 cases (47%) to absent in 18 cases (<10%). This uneven distribution of CXCL12 did not reflect the morphological heterogeneity of EOC. CXCL12 expression levels were not correlated with any of the clinical parameters currently used to determine EOC prognosis or with HER2 status. They also had no impact on progression-free or overall survival. Conclusion Our findings highlight the previously unappreciated constitutive expression of CXCL12 on healthy epithelia of the ovary surface and fallopian tubes, indicating that EOC may originate from either of these epithelia. We reveal that CXCL12 production by malignant epithelial cells precedes tumorigenesis and we confirm in a large cohort of patients with advanced EOC that CXCL12 expression level in EOC is not a valuable prognostic factor in itself. Trial Registration ClinicalTrials.gov: NCT00052468 Results: Epithelial cells from the surface of the ovary and the fallopian tubes stained positive for CXCL12, whereas the follicles within the ovary did not. Epithelial cells in benign, borderline and malignant tumors also expressed CXCL12. In EOC specimens, CXCL12 immunoreactivity was observed mostly in epithelial tumor cells. The intensity of the signal obtained ranged from strong in 86 cases (47%) to absent in 18 cases (<10%). This uneven distribution of CXCL12 did not reflect the morphological heterogeneity of EOC. CXCL12 expression levels were not correlated with any of the clinical parameters currently used to determine EOC prognosis or with HER2 status. They also had no impact on progression-free or overall survival. Conclusion: Our findings highlight the previously unappreciated constitutive expression of CXCL12 on healthy epithelia of the ovary surface and fallopian tubes, indicating that EOC may originate from either of these epithelia. We reveal that CXCL12 production by malignant epithelial cells precedes tumorigenesis and we confirm in a large cohort of patients with advanced EOC that CXCL12 expression level in EOC is not a valuable prognostic factor in itself. 
Trial Registration: ClinicalTrials.gov: NCT00052468 Background Epithelial ovarian cancer (EOC) has one of the highest mortality rates of all gynecologic malignancies. It is the sixth most common cancer and the fifth most common cause of cancer-related death among women in developed countries [1]. Due to the silent nature of earlystage disease, most women with EOC have disseminated disease (i.e. expansion in the peritoneum and metastasis in the omentum) at the time of diagnosis and present an advanced stage of the disease, with a five-year survival rate below 30% [2]. Despite the high incidence and mortality rates, the etiology of EOC and the molecular pathways underlying its progression remain poorly understood. According to the International Federation of Gynecology and Obstetrics (FIGO), clinical stage, histologic grade and postoperative residual tumor mass are the most important prognostic factors in patients with EOC [3]. However, clinical factors and derivative prognostic models remain inadequate for the accurate prediction of outcome for a specific patient, indicating a need for the identification of biological factors to improve prognostic assessment. This aspect has recently been addressed with the identification of several biomarkers for the identification of histologic subtypes and the more accurate prediction of patient outcome [4][5][6][7][8][9]. Chemokines and their receptors have been known for many years to influence the development of primary epithelial tumors, in which they regulate the proliferation and survival of tumor cells, tumor-infiltrating leukocytes, angiogenesis and metastasis [10][11][12]. In epithelial cancers, these molecules play a key role in controlling both autocrine and paracrine communication between the different cell types of the tumor microenvironment [13]. Thus, chemokines and their receptors may constitute new biomarkers of potential prognostic value in various cancers, including EOC. In this study, we focused on the α-chemokine stromal cell-derived factor-1 (SDF-1)/CXCL12, which, together with its receptors CXCR4 and CXCR7, constitutes the chemokine/receptor axis attracting the greatest level of interest in oncology [11,14]. In EOC, CXCL12 products (i.e. protein and mRNA) have been detected in tumor cells [15,16]. We previously showed that CXCL12 orchestrates the recruitment of pre-DC2s and protects them from tumor macrophage IL-10-promoted apoptosis, thereby contributing to the immunosuppressive network within the tumor microenvironment [15]. In addition, CXCL12 regulates tumor angiogenesis, a critical step in tumor growth. Indeed, we have shown that hypoxia triggers the production of CXCL12 and vascular endothelium growth factor (VEGF) by EOC, with these two molecules acting in synergy to enhance tumor angiogenesis in vivo [17]. CXCL12 also acts on tumor cell proliferation and survival and, through its main receptor CXCR4, governs the migration of malignant cells and their invasion of the peritoneum, a major route for ovarian cancer spread [16,[18][19][20]. Other factors must also been considered, but previous observations strongly suggest that CXCL12 provides the autocrine and paracrine signals controlling malignant progression in EOC [11]. Some recent studies have investigated CXCL12 status in EOC, and reported no prognostic significance of CXCL12 production [11,21,22]. 
However, the results were obtained with ovarian cancer specimens from patients undergoing chemotherapy via heterogeneous protocols, with a follow-up period of less than four years. The prognostic significance of CXCL12 production by ovarian cancer cells remains to be clearly assessed in larger cohorts of EOC patients undergoing the same type of chemotherapy and followed up for longer periods. Furthermore, the pattern of CXCL12 expression in healthy ovaries and in benign and borderline ovarian tumors has scarcely been investigated. Elucidation of these points is required to determine whether CXCL12 production is associated with the malignant process and whether it constitutes a valuable prognostic factor in EOC. In this study, we investigated CXCL12 status in the reproductive tracts of healthy women. We studied the ovarian surface epithelium (OSE) and fallopian tubes, both of which are considered probable sources of EOC [23,24]. We also investigated CXCL12 status in benign and borderline epithelial tumors, and in a series of 183 patients with advanced primary EOC enrolled in a multicenter prospective clinical trial of paclitaxel/carboplatin/gemcitabine (TCG)-based chemotherapy [ClinicalTrials.gov Identifier: NCT00052468]. We quantified CXCL12 by immunohistochemistry (IHC) in EOC specimens and further assessed its potential association with clinical and pathologic features, including staging parameters and tumor histotypes, and with the expression of HER2, a tyrosine kinase receptor that may influence outcome when overexpressed [21,25]. Finally, we investigated whether the production of CXCL12 within the tumor affected progression-free survival (PFS) and the overall survival (OS) of patients with advanced primary EOC. Ethics statement We included 183 patients with advanced primary EOC (FIGO stage Ic-IV) in this study. All had been enrolled in the GERCOR-AGO-OVAR-9 large phase III randomized trial of first-line TCG-based chemotherapy (GINECO study) conducted at 58 French centers from July 2002 to April 2004 [ClinicalTrials.gov Identifier: NCT00052468] [25][26][27]. Formalin-fixed, paraffinembedded tumors from primary surgery were obtained with the approval of the institutional review board of the corresponding center (CCPPRB number: 02780) after inclusion of the patient in the clinical trial. Formalin-fixed and paraffin-embedded specimens recovered from five healthy ovaries (mostly contralateral to the malignant ovary), eight benign tumors (4 serous and 4 mucinous), eight borderline tumors (4 serous and 4 mucinous), and three non epithelial ovarian tumors (2 granulosa tumors and 1 dysgerminoma) were provided from the archives of patients treated at Antoine-Béclère Hospital (Service d'Anatomie et de Cytologie Pathologiques, Clamart, France) between 1998 and 2007. Approval was obtained from the ethics commission of Antoine-Béclère Hospital for all analyses of tumor material from the archives initially obtained for routine diagnostic and therapeutic purposes. This study was carried out in accordance with good clinical practice guidelines, national laws, and the Declaration of Helsinki. All patients provided written informed consent. Cell enrichment Tumor cell enrichment from malignant ascites was based on the expression of CD326, a human epithelial antigen also known as EpCAM, one of the most frequently identified and highly expressed biomarkers in EOC [28]. 
CD326 + cells were positively selected on AutoMACs columns (Myltenyi Biotech, Paris, France), from ascites samples collected with ethics committee (Antoine-Béclère Hospital) approval from one patient (FIGO stage IV) diagnosed with invasive EOC with peritoneal extension, as previously described [29]. In the positive fraction, the percentage of CD326 + cells was >80%, whereas the negative fraction contained mostly CD45 + leukocytes, as determined by flow cytometry (FACSCalibur, BD Biosciences, Le Pont De Claix, France) with FITC-conjugated anti-human CD45 (clone H130, IgG1, BD Biosciences) and PE-conjugated antihuman CD326 (clone HEA 125, IgG1, Myltenyi Biotec) monoclonal antibodies (mAb). Ascites samples were also analyzed for CXCL12 content with the human CXCL12/ SDF-1α Quantikine ELISA kit (R&D Systems, Lille, France), according to the manufacturer's instructions. Immunostaining grading and score CXCL12 was localized immunohistochemically on 4 μm sections of paraffin-embedded tissues (healthy ovaries, benign, borderline and non epithelial tumors) and on tissue microarrays (TMAs) of EOC specimens. Identical experimental protocols were used for immunohistochemistry (IHC) on conventional slides and TMAs. Sections were deparaffinized and rehydrated and then treated with citrate buffer pH6 and heated in a microwave oven. For CXCL12 immunostaining, we used a mAb against CXCL12 (clone K15C, IgG2a) at a concentration of 1.37 μg/ml. This mAb has already been widely used for the detection of CXCL12 in mesothelial cells, ovarian cancer cells and breast carcinomas [15,[30][31][32][33]. The binding of the K15C mAb was detected by the streptavidine-biotin peroxidase method (LSAB kit, Dako, Trappes, France). Sections were then counterstained with hematoxylin. Images were obtained with a Leica DMLB microscope equipped with standard optic objectives, at the indicated magnifications, and digitized directly with a Sony 3CCD color video camera. RT-PCR analyses Total cellular RNA was extracted from freshly frozen ovarian tissue samples, with the RNeasy Mini kit (Qiagen, Courtaboeuf, France). It was then reverse-transcribed with random hexamers (Roche Diagnostics, Meylan, France) and Moloney murine leukemia virus reverse transcriptase (Fisher Bioblock, Illkirch, France). The resulting cDNAs (1 μg) were then amplified by semi-quantitative PCR (2 min at 94°C followed by 33 cycles of 30 s at 61°C or 55°C for CXCL12 and b-actin, respectively) with forward (658-677) We used the ABI 7300 Sequence Detection System (Applied Biosystems, Courtaboeuf, France) with the following amplification scheme: 95°C for 10 min and 45 cycles of 95°C for 10 s, 68°C for 10 s and 72°C for 5 s. The dissociation curve method was applied, according to the manufacturer's protocol (60°C to 95°C), to ensure the presence of a single specific PCR product. The standard curve method was used for analysis, and results are expressed as CXCL12/b-actin ratios. Statistical analyses For the series of 183 patients included in the GINECO clinical trial, the relationship between CXCL12 expression and clinical and pathologic features was assessed with t-tests (continuous variables) or Fisher's exact tests (binary variables). Overall survival (OS) was calculated from the date of inclusion to death and progression-free survival (PFS) was calculated from the date of inclusion until progression or last follow-up examination. 
Progression was defined as a 20% increase in the diameter of all measured lesions, the appearance of new lesions and/or the doubling of CA125 tumor marker concentration from baseline values. Kaplan-Meier analysis was carried out to generate PFS and OS curves. Univariate COX model analysis was carried out to assess the prognostic influence of clinical and biological variables. Hazard ratios (HR) and 95% confidence intervals (CIs) were determined. Analyses were performed with R software (The R Foundation for Statistical Computing ISBN 3-900051-07-0, http://www.r-project.org). For comparisons of CXCL12 mRNA levels in EOC samples, we used unpaired two-tailed Student's t tests (Prism software, GraphPad). P values <0.05 were considered statistically significant. Detection of CXCL12 products in healthy and malignant ovarian epithelial cells The cellular expression of CXCL12 was examined by IHC on sections isolated from five healthy ovaries, eight serous or mucinous benign tumors (some still containing normal ovarian tissue), eight serous or mucinous borderline epithelial tumors, three non epithelial ovarian tumors (i.e. 2 granulosa tumors and 1 dysgerminoma), and 183 invasive EOC. CXCL12 was clearly detected in cells of the OSE and the fallopian tube epithelium (Figure 1A). By contrast, CXCL12 was absent from both ovarian follicles and oocytes. CXCL12 immunoreactivity was detected in epithelium-derived proliferating tumor cells from benign tumors ( Figure 1B). In both serous and mucinous borderline tumors and in serous, clearcell, endometrioid and mucinous EOC specimens, CXCL12 was heterogeneously distributed in malignant cells, defining low and high expression profiles ( Figures 1C and 1D). CXCL12 was confined to the cytoplasm of malignant epithelial cells, with particularly strong staining of the membrane frequently observed, and was not detected in nuclei ( Figure 1D). CXCL12 was barely detectable in the stroma, and tumor epithelial cells are thus probably the principal source of CXCL12 in EOC. Epithelial cells isolated from malignant ascites and identified as CD326 + cells were stained for CXCL12, whereas their CD326non epithelial counterparts, consisting mostly of CD45 + leukocytes, were not stained for CXCL12 ( Figure 1E). CXCL12 was also absent from non epithelial ovarian tumors ( Figure 1F). A similar pattern was observed for CXCL12 mRNA, as shown by conventional and real-time PCR (Figure 2). CXCL12 was assayed in the culture medium of three ovarian cancer cell lines, SKOV-3, OVCAR-3 and BG-1, and in malignant ascites. We found that CXCL12 was produced in all cases, at concentrations of 2 to 10 ng/ml. Thus, the epithelial cells of both the OSE and fallopian tubes constitutively produce CXCL12 in the reproductive tracts of healthy women. CXCL12 was recovered from benign, borderline and malignant epithelial tumors but not from non epithelial ovarian tumors. Correlation of CXCL12 expression with clinical and pathological characteristics We evaluated the prognostic value of CXCL12 for EOC, by quantifying CXCL12 staining in 183 ovarian cancer specimens. The mean age of the patients at initial diagnosis was 59 years (range 25-77); 68% of patients had serous adenocarcinomas and 85% had stage III/IV disease. Median follow-up for patients was 69 months. CXCL12 was heterogeneously distributed in tumor cells, with some cells displaying no detectable staining and others, strong immunoreactivity. 
CXCL12 expression ranged from high levels (scores 5-7) in 86 (47%) specimens to an absence of staining in 18 (<10%) cases. We applied a single cut-off at score 4, the median and mean value of the entire cohort, for the identification of samples producing low-moderate (CXCL12 low/moderate , scores 0-4, n = 97) and high (CXCL12 high , scores 5-7, n = 86) levels of CXCL12. The median age of the patients was 59 (range 25-77) in the CXCL12 low/moderate group and 57 (range 33-75) in the CXCL12 high group. There was thus no significant difference in patient age between these two groups (P = 0.31). Statistical analyses of CXCL12 high and CXCL12 low/moderate immunostaining and classical clinical parameters, such as histotype, HER2 status, FIGO stage, ascites and size of residual tumor after first laparotomy, revealed no significant correlation of CXCL12 status with any of the parameters tested (Table 1). Thus, CXCL12 expression does not reflect the clinical status of OEC. Correlation of CXCL12 expression and patient outcome We then investigated whether CXCL12 high or CXCL12 low/moderate status affected OS and/or PFS. As expected, univariate analysis validated, for this series, known prognosis factors such as performance status, FIGO stage, presence of ascites and residual tumor after first laparotomy, which were associated with shorter OS and PFS (Table 2). In our large and homogeneous cohort, CXCL12 expression levels had no effect on OS or PFS (Table 2 and Figure 3). Thus, CXCL12 expression by tumor epithelial cells is not in itself a valuable prognostic factor in patients with advanced EOC. Discussion In this study, we demonstrate the previously unappreciated constitutive expression of CXCL12 by healthy ovarian epithelial cells and ovarian epithelial tumor cells, whether benign or malignant. CXCL12 was recovered from both the OSE and the epithelium of the fallopian tubes, both of which are considered possible origins of EOC. By contrast, it was not detected in follicles and oocytes or in malignant tumor cells arising from them, granulosa tumors or dysgerminomas. Furthermore, CXCL12 expression correlated neither with clinical parameters nor with HER2 status in specimens from 183 patients with advanced primary EOC enrolled in a multicenter clinical trial of first-line TCGbased chemotherapy (GINECO study). The intratumoral production of CXCL12 does not reflect the morphological heterogeneity of EOC and has no impact on PFS or OS after adjustment for established prognostic factors. Thus, CXCL12 is expressed by ovarian epithelial cells before tumorigenesis and does not constitute a valuable prognostic factor in EOC patients. Recent studies on the origin and histogenesis of EOC have proposed that type I tumors, which are believed to include all major histotypes, originate from the OSE, whereas type II tumors, which are thought to consist almost exclusively of high-grade serous carcinomas, arise from the distal region of fallopian tubes [4,24,34]. Our findings clearly demonstrate that CXCL12 is constitutively produced by the epithelial cells of the OSE, whereas such expression was not previously suspected [11,16]. This apparent discrepancy may result from differences between our experimental protocol and those used in previous studies. For example, we used an anti-CXCL12 mAb rather than a polyclonal Ab and an additional microwave pretreatment for antigen retrieval, both of which would have increased the sensitivity of immunostaining. 
CXCL12 was also recovered from the epithelial cells of fallopian tubes, which were recently identified as a possible origin of high-grade serous EOC and which have a Müllerian duct-derived embryologic origin in common with the OSE [23,28,35]. By contrast, CXCL12 was undetectable in follicles, oocytes and their malignant non epithelial counterparts. Thus, CXCL12 is a chemokine constitutively produced by epithelial ovarian cells, from both healthy and malignant tissues. CXCL12 is present in ovarian epithelial cells before they become malignant and is therefore not useful as a marker of malignancy in EOC. Scotton and coworkers reported a trend toward stronger CXCL12 expression in higher grade tumors [16]. However, Pils and coworkers recently found that the abundance of CXCL12 did not differ between borderline and malignant tumors [21]. In the present work, CXCL12 expression was detected in benign tumors as well as in borderline and malignant tumors. Although CXCL12 is unevenly distributed in low-grade (i.e. borderline and stage I) and in more advanced stage tumors, we have no evidence that its expression level is weaker in low-grade tumors. Among CXCL12-positive EOC specimens, we observed no significant differences in the fraction of CXCL12 high -producing tumors for the four histotypes examined (i.e. serous, clear-cell, endometrioid and mucinous), despite previous reports of differences in epidemiologic and genetic changes, tumor markers and response to treatment (reviewed in [11]). We suggest that CXCL12 production levels overlap with EOC histotype differentiation and staging. Consistent with previous findings [16,22], CXCL12 was detected in more than 90% of patients with advanced primary EOC. However, it was barely detectable in the remaining cases (<10%), suggesting that CXCL12 expression might have been silenced, possibly through epigenetic mechanisms, such as promoter hypermethylation, a phenomenon already reported for colon carcinoma and breast cancer [36,37]. Indeed, further in-depth studies are required to determine whether transcriptional regulatory mechanisms account for heterogeneous CXCL12 production in EOC. Recent studies have assessed the prognostic significance of CXCL12 expression in various cancers, including colorectal carcinoma [38], pancreatic ductal adenocarcinoma [39], breast cancer [40], esophageal squamous cell carcinoma [41], endometrial cancer [42], germ cell tumors [43] and EOC [21,22]. The study reported here was based on a large, homogeneous cohort of 183 patients, all given standard TCG-based chemotherapy. IHC showed that CXCL12 abundance was not correlated with any of the clinical parameters tested or with the HER2 status. Patients with CXCL12 high -producing tumors had a PFS and OS similar to those of patients with CXCL12 low/moderate -producing tumors. Consistent with the findings of smaller cohorts of patients given heterogeneous treatments [21,22], we therefore suggest that there is no evidence that CXCL12 production by malignant epithelial ovarian cells is of prognostic significance in EOC. This lack of prognostic value for CXCL12 in EOC is somewhat puzzling, as this chemokine has been reported to enhance tumor cell proliferation and survival [16,[18][19][20]44], to promote angiogenesis [17], to inhibit the host immune response [15] and to mediate resistance to hyperthermic intraperitoneal chemotherapy [45], which may favor tumor growth and spread. 
This apparent paradox may be explained by the cellular expression of CXCL12 not providing a true reflection of its bioavailability, which depends principally on the presence in the tumor microenvironment of factors capable of displacing CXCL12 from glycosaminoglycans [46]. Moreover, CXCL12 activity may be mediated by two receptors, CXCR4 and CXCR7, and these receptors may also be rate-limiting elements. For many years, CXCL12 and CXCR4 were thought to act as an exclusive, nonredundant pair. However, the recent identification of RDC1/CXCR7 as a second receptor for CXCL12 has challenged this view, and we now need to determine the respective contributions of CXCR4 and CXCR7 to the homeostatic and pathological activities of CXCL12 [47-49]. The emerging possibility that CXCR7 acts as a decoy receptor provides further support for a potential role in EOC. Finally, the lack of influence of CXCL12 may reflect the unusual characteristics of metastases, the prediction of which is one of the major challenges in efforts to improve the clinical outcome of EOC. In contrast to breast cancer, in which distant metastases to the liver, lung and bone marrow are favored by high levels of CXCL12 expression in target organs and lower levels within the tumor [50], EOC spreads by the direct seeding of tumor cells into the peritoneal cavity, with preferential metastasis to local lymph nodes. In EOC, CXCL12 mRNA and protein have been detected mostly in the tumor cells themselves, and this feature has been reported for other cancers, including follicular lymphoma, pancreatic cancer, glioma and astrocytoma [10]. Ovarian epithelial tumor cells constitute a potent source of CXCL12. CXCL12 may therefore retain tumor cells at the site of production, rather than encouraging them to disseminate and to form secondary tumors in organs at some distance from the original tumor. Conclusions Our findings highlight the previously unappreciated constitutive expression of CXCL12 by healthy epithelia of the ovary surface and the fallopian tubes, both these epithelia having been identified as probable sources of EOC. Thus, CXCL12 is expressed by epithelial cells before they become malignant. We also show that the level of CXCL12 expression in cancer cells is not a valuable prognostic factor in patients with advanced EOC. These findings do not exclude the possibility that CXCL12 contributes to tumor growth and spread via autocrine and/or paracrine action. There is therefore a need to determine whether CXCL12 status in EOC depends on its bioavailability and on the CXCR4/CXCR7 ratio in tumor cells, which would support an effect of CXCL12. Abbreviations: TCG: paclitaxel/carboplatin/gemcitabine; TMA: tissue microarray; VEGF: vascular endothelium growth factor.
v3-fos-license
2015-07-16T14:03:08.000Z
2015-07-16T00:00:00.000
16192729
{ "extfieldsofstudy": [ "Biology", "Physics", "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevE.92.022701", "pdf_hash": "4d726b3cbf61d8f5b81375d747616ee830221e31", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:82", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "4d726b3cbf61d8f5b81375d747616ee830221e31", "year": 2015 }
pes2o/s2orc
Feeding ducks, bacterial chemotaxis, and the Gini index Classic experiments on the distribution of ducks around separated food sources found consistency with the `ideal free' distribution in which the local population is proportional to the local supply rate. Motivated by this experiment and others, we examine the analogous problem in the microbial world: the distribution of chemotactic bacteria around multiple nearby food sources. In contrast to the optimization of uptake rate that may hold at the level of a single cell in a spatially varying nutrient field, nutrient consumption by a population of chemotactic cells will modify the nutrient field, and the uptake rate will generally vary throughout the population. Through a simple model we study the distribution of resource uptake in the presence of chemotaxis, consumption, and diffusion of both bacteria and nutrients. Borrowing from the field of theoretical economics, we explore how the Gini index can be used as a means to quantify the inequalities of uptake. The redistributive effect of chemotaxis can lead to a phenomenon we term `chemotactic levelling', and the influence of these results on population fitness are briefly considered. INTRODUCTION In one of the more amusing, yet influential experiments on animal behavior, Harper [1] studied the distribution of mallards around two separated sources of standardized pieces of bread. After an induction period on the order of a minute, the average number of ducks clustered tightly around each station stabilized. The distribution he observed was simple: the number of ducks at each source was proportional to the flux of bread there (pieces/minute). This constituted the first experimental observation of the so-called ideal free distribution previously introduced in theoretical ecology. Using the terminology of Fretwell and Lucas [2], 'ideal' means that ducks can identify the source where their uptake is maximized, and 'free' implies unfettered ability to access the source of choice. This distribution, resulting from individual rational behaviors, achieves a population-wide uniformization of the probability of uptake, and can be understood as a Nash equilibrium [3]. These works impacted not only ornithology, but ecology [4,5], evolutionary biology [6] and the study of human behavior [7][8][9], all areas involving resource acquisition in a heterogeneous environment. Here we take motivation from Harper's experiment, and others discussed below, to examine resource acquisition in a heterogeneous microbial world [10] where swimming microorganisms respond to nutrient sources through concentration fields determined by molecular diffusion and microbial uptake. For the specific case of peritrichously flagellated bacteria such as E. coli and B. subtilis, cells move in a run-and-tumble random walk biased by concentration gradients, resulting in drift of the population up these gradients [11]. Because chemotaxis [12] is quite different from the visually-based searching of higher animals, and because of the diffusive behavior of nutrients and the cell populations the microbial problem is distinct in character. This feature motivates the present investigation of the consequences of a collection of individual chemotactic responses on the population-scale distribution of resources. While chemotaxis is generally thought to optimize uptake at the single-cell level [13], even the mere presence of translational diffusion in a population would lead to a distribution of uptake rates. 
And the interplay of chemotaxis and consumption will modify the distribution of resources and cells, with further potential impact on the uptake rate distribution. In this paper we focus on three key questions in this area: What is the distribution of bacteria around spatially distinct nutrient sources and their associated impact on the resource field? What is the distribution of resource uptake rates within that population? What are the consequences of such distributions for cellular fitness? A historically important experiment on spatiallyvarying resources is Engelmann's 1883 determination of the action spectrum of photosynthesis [14]. Having discovered bacteria that are attracted to the oxygen produced by photosynthesis [15], he imaged the solar spectrum onto a linear algal cell in an air-tight chamber containing such bacteria. They clustered around the alga in proportion to the local oxygen production, revealing with greater precision than the available techniques of the time the peaks of photosynthetic activity for blue and red wavelengths. How reliably the local bacterial accumulation reflects the oxygen production rate, in the face of both bacterial and oxygen diffusion, remains an open question in the spirit of the present investigation. Moreover, this work demonstrates how micro-domains releasing a limited quantity of attractive nutrients in a continuous fashion can arise in the microbial world. Understanding bacterial organization and uptake around algal resources also finds an important biological context in the case of bacterial-algal symbiosis. The recent discovery [16] that many algae dependent on vitamin B12 obtain it through symbiotic relationships with bacteria raises questions about spatio-temporal aspects of symbiosis: how the two species find each other and arrange themselves to achieve the symbiosis. These are examples of a more general problem of microorganisms responding to the 'patchy' nature of nutrients in ecosystems [17], such as the prosaically named 'marine snow' [18]. We emphasize the fundamental difference between live microbial sources, such as Engelmann's alga, and inert sources, such as lysis events; the former continuously release nutrients at low rates. They are thus stable in time but can be significantly impacted by bacterial uptake. These characteristics makes them the natural microbial equivalent to the limited continuous sources considered by Harper in his animal experiments. To make concrete the interplay between production, consumption, diffusion, and chemotaxis we consider a generalized Keller-Segel (KS) model [19]. While the KS model is a well-established model that has found frequent application in the study of spatially extended microorganism populations [20], the biological setting of localized, low-intensity sources with a steady release of nutrients is little-studied, and the overarching issue of resource uptake distribution is essentially unexplored. In what follows the steady-state distributions arising from multiple localized sources are analyzed to understand the consequences of chemotaxis on the distribution of uptake in the bacterial population, the total uptake being fixed. Borrowing from theoretical economics, we next propose that the Gini index [22], a number originally used to characterize wealth inequality, can be used to quantify the individual uptake distribution. Varying the model parameters and the dimensionality of space we show that chemotaxis can switch from redistributing the resource to generating greater inequalities. 
Finally, we explore the potential biological consequences of uptake redistribution through an example of growth at low nutrient levels. THE MODEL Consider bacteria with mean concentration b_0 and local concentration b(r, t) in a d-dimensional volume L^d. Within the volume are nutrient sources with fluxes {φ_i} of typical value φ_0, leading to a concentration field c(r, t). The fields b and c obey the KS equations ∂b/∂t = ∇·[D_b ∇b − b χ(c) ∇c] and ∂c/∂t = D_c ∇²c − b f(c) [19], with D_c and D_b the diffusion coefficients of nutrients and bacteria respectively, and χ(c) the chemotactic response coefficient. The nutrient uptake rate f(c) per bacterium is expected to behave at low c as f(c) ∼ kc, with saturation at high c: f(c) ∼ k_max. When the nutrient is essential for life, such as oxygen for obligate aerobes, f(c) may vanish below some c* [24,25]. As in Harper's study and Engelmann's experiment, the interesting regime has the resource limiting, with b_0 and L such that the total nutrient flux can be consumed and is thus below the maximum value k_max b_0 L^d. Otherwise, steady state cannot be attained and c increases indefinitely. For small c, we identify the length scale ℓ_k = (D_c/k b_0)^{1/2} for concentration gradients due to uptake. For a run-and-tumble chemotaxis mechanism to operate and the continuum model to be relevant, ℓ_k should be large compared to the run length ℓ_run = vτ, where v is a typical swimming speed and τ the time between tumbles. That is, we require the Knudsen-like number ℓ_run/ℓ_k ≪ 1. For E. coli, with v ∼ 20 µm/s and τ ∼ 1 s, ℓ_run ∼ 20 µm, but it can be considerably longer for other bacteria [28]. Still at low c, with a constant chemotactic coefficient χ(c) = χ_0 [21], interesting behavior occurs when the dimensionless ratio α of the chemotactic to the diffusive bacterial flux exceeds unity. A one-dimensional version of Harper's experiment (Fig. 1(a)) has fluxes φ_l and φ_r at the left and right domain boundaries, and Φ = φ_l + φ_r. Nondimensionalizing as above, we obtain in the low-c regime the dimensionless evolution equations (4) and (5), in which the uptake term is the bilinear product bc, the chemotactic term carries the coefficient α, and δ = D_b/D_c sets the relative time scales, together with nondimensional boundary conditions expressing zero bacterial flux through the walls and prescribed nutrient influxes proportional to the source strengths, where n̂ is the outward unit normal to the domain. Two new parameters drive the evolution of the system: λ = L/ℓ_k, the domain size relative to the screening length, and the relative strength s = φ_l/Φ of the left source. Before investigating the steady-state solutions to (4) and (5), we point out that by construction the total uptake U over the population is equal at steady state to the total nutrient flux into the chamber, independent of the choice of parameters and strength of chemotaxis. Thus overall uptake optimization is not part of the present study: our interest resides instead in how the individual behaviors result in a distribution of finite resource among population members, just as in Harper's experiment. One of the natural questions to ask is whether there is a variational structure to the KS equations (4) and (5). The only related result of which we are aware concerns the case when the nutrient consumption rate f(c) takes the aforementioned high-c form of a constant. After suitable rescaling that dynamics is ∂c/∂t = ∇²c + ν b (9) and ∂b/∂t = ∇²b − ∇·(b ∇c) (10), where ν = −1. As shown recently [26], a dynamics of a closely related form is variational. If we introduce the energy functional E[b, c] = ∫ dr [b ln b − b c + ½ |∇c|²], then the variational relations ∂b/∂t = ∇·(b ∇ δE/δb) and ∂c/∂t = −δE/δc yield (9) and (10), but with ν = +1. This case corresponds to the well-studied situation in which the bacteria are sources for the chemoattractant, rather than sinks. Because the same term (−bc) in E yields both the production/consumption term in (9) and the chemotactic term in (10), the case ν = −1 appears not to have any variational structure of this type [27].
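To make the model concrete, the following minimal sketch integrates a 1D chemotaxis-diffusion-consumption system of the type just described to its steady state. Because the paper's exact nondimensional equations (4)-(8) are not reproduced in this text, the scaling used below (lengths in units of ℓ_k, time in units of ℓ_k²/D_b, linear uptake bc, boundary influxes s and 1−s) is an assumption chosen for illustration only.

```python
# Minimal 1D sketch of the chemotaxis-diffusion-consumption dynamics described above.
# The scaling is an assumption: lengths in units of l_k, time in units of l_k^2/D_b,
# linear uptake b*c, and boundary nutrient influxes of relative strength s and 1-s.
import numpy as np

def ks_steady_state(alpha=3.0, lam=3.0, delta=0.6, s=0.125, n=120, dt=1e-4, t_max=30.0):
    x = np.linspace(0.0, lam, n)
    dx = x[1] - x[0]
    b = np.ones(n)          # bacterial density, scaled to unit mean
    c = np.zeros(n)         # nutrient concentration
    for _ in range(int(t_max / dt)):
        # bacterial flux J = -db/dx + alpha * b * dc/dx on cell faces, zero at the walls
        dcdx = (c[1:] - c[:-1]) / dx
        dbdx = (b[1:] - b[:-1]) / dx
        bmid = 0.5 * (b[1:] + b[:-1])
        J = np.concatenate(([0.0], -dbdx + alpha * bmid * dcdx, [0.0]))
        b = b - dt * (J[1:] - J[:-1]) / dx
        # nutrient: diffusion minus linear uptake, with influx s at x=0 and 1-s at x=lam
        lap = np.empty(n)
        lap[1:-1] = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        lap[0] = (2.0 * (c[1] - c[0]) / dx + 2.0 * s) / dx            # ghost-node Neumann BC
        lap[-1] = (2.0 * (c[-2] - c[-1]) / dx + 2.0 * (1.0 - s)) / dx
        c = c + dt * (lap - b * c) / delta
    return x, b, c

x, b, c = ks_steady_state()
dx = x[1] - x[0]
print("total bacteria (conserved by the scheme):", b.sum() * dx)
print("integral of b*c at steady state (approx. 1, balancing the influx):", (b * c).sum() * dx)
```

The parameter values default to those quoted for Fig. 1 (α = 3, λ = 3, δ = 0.6, s = 1/8); the explicit time step is chosen small enough for diffusive stability on this grid.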
But since the consumption rate per bacterium is constant in this regime, the distribution of resource acquisition rates is trivial and a variational structure would provide no new information. More importantly, the case under consideration here, with f(c) ∼ c, also appears not to possess a variational structure, and thus it is not possible to conclude that the steady-state solutions are in any way minimizers or maximizers of some energy-like functional. It follows that the distribution of uptake rates is a nontrivial feature of the underlying diffusion-consumption-chemotaxis dynamics. Figures 1(b) and 1(c) show steady-state distributions of b and c for α = 3, λ = 3, δ = 0.6 and s = 1/8, which correspond to physically reasonable values of the dimensional parameters [29,30], along with the non-chemotactic case and the case α = 7. Bacteria accumulate on both sides, with more cells closer to the stronger source, as one might expect. Intriguingly, this leads to what we term chemotactic levelling of the nutrient: a more uniform concentration field than without chemotaxis. In particular, we notice that the maximum uptake rate of a bacterium in this population, obtained closest to the strongest source, is decreased by chemotaxis. Diffusion of b and c precludes the ideal free distribution. QUANTIFYING INEQUALITIES WITH THE GINI INDEX Whereas studies on biological consequences of chemotaxis usually measure the increased uptake over the whole or part of the population with respect to the non-chemotactic case (e.g. [31,32]), as emphasized above, the present study is focused on the case for which the mean uptake rate is independent of the chemotactic behavior. As our interest then resides in how equally resources are spread among the population, comparison of the chemotactic results with the non-chemotactic reference distributions requires a measure of the proximity to the ideal free distribution. Among the many possible measures of inequality [33], we consider here the Gini index G ∈ [0, 1] [22], which for a distribution P(w) of wealth w in a population can be expressed as G = 1/(2w̄) ∫ dw ∫ dw′ P(w) P(w′) |w − w′| (13), where w̄ is the mean wealth. The ideal free distribution, in which every individual has the same wealth w_0, is P(w) = δ(w − w_0) and thus G = 0, while larger values hold for more unequal distributions. G can be used with any notion of 'wealth' [34], such as biodiversity [35]. We should emphasize that in using the Gini index to quantify uptake inequalities in the present study we do not imply any preferred status of G as a metric for resource acquisition distributions. Many other measures could be explored; the Gini index has the advantage of translating easily to continuous distributions and thus enabling analytical work on its variations. Using the individual nutrient uptake rate as wealth, we can transform the integrals over uptake levels into integrals over space, with the bacterial density playing the role of the frequency distribution of uptake levels. Normalizing this density by the total number of bacterial cells, which then appears in the denominator, we finally re-express (13) for time-dependent spatial distributions in a domain Ω as G(t) = ∫_Ω dr ∫_Ω dr′ b(r, t) b(r′, t) |f(c(r, t)) − f(c(r′, t))| / [2 ∫_Ω dr b(r, t) ∫_Ω dr′ b(r′, t) f(c(r′, t))] (14). With this measure of inequality, we investigate how the uptake distribution at steady state depends on the parameters of the KS model (4)-(5).
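A discrete version of the Gini index of uptake, Eq. (14), can be evaluated directly from steady-state profiles such as those produced by the sketch above; in the low-c regime the uptake rate per bacterium is proportional to c, which is used as the "wealth" variable below.

```python
# Discrete version of the Gini index of uptake, Eq. (14), for 1D profiles b(x) and c(x);
# in the low-c regime the uptake rate per bacterium is proportional to c, so c plays the
# role of the "wealth" variable.
import numpy as np

def gini_uptake(b, c, dx):
    w = b * dx                                  # cells represented by each grid point
    mean_uptake = np.sum(w * c) / np.sum(w)
    diff = np.abs(c[:, None] - c[None, :])      # |u_i - u_j| for all pairs of grid points
    num = np.sum(w[:, None] * w[None, :] * diff)
    return num / (2.0 * np.sum(w) ** 2 * mean_uptake)

# e.g., comparing chemotactic and non-chemotactic steady states from the earlier sketch:
# x, b, c = ks_steady_state(alpha=3.0); x0, b0, c0 = ks_steady_state(alpha=0.0)
# print(gini_uptake(b, c, x[1] - x[0]), gini_uptake(b0, c0, x[1] - x[0]))
```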
Before proceeding we should make clear that the Gini index values calculated within the present approach, and the corresponding inequalities in the uptake levels, are instantaneous: physically, bacteria would swim along a biased random walk inside the steady-state nutrient distribution, thus sampling different concentration levels. Over a time long in comparison with the typical time of bacterial diffusion at the scale of the experimental chamber, this motility would tend to level the integrated uptake within the population and yield lower values of G. The fundamental issue is then whether the time scale for this smoothing-out of inequalities is large or small compared to the timescale τ_int for a relevant internal biological process based on nutrient uptake. The present approach is thus valid in the limit τ_int D_b/L² ≪ 1, and is therefore most relevant to large system sizes and short internal times. As an example, it takes approximately 17 h for a typical run-and-tumble bacterium (D_b = 4 × 10⁻⁶ cm² s⁻¹) to explore the space between two sources separated by 5 mm, a time that is much longer than the typical scale of key cellular processes such as division (approx. 30 minutes). We now go back to investigating the role of each parameter: the ratio of diffusion coefficients δ impacts transient dynamics but does not modify steady-state solutions. From numerical solutions in the phase space delimited by s ∈ [0, 0.5] (from one source on the right to equal sources), α ∈ [0, 15] (strength of chemotaxis) and λ ∈ [0.5, 15] (domain size), we obtain first the intuitive result that for given chemotactic and domain-size parameters, the more equal the sources are, the more equal the uptake is among the population, and the lower is G. More balanced sources indeed create smaller nutrient gradients, thus a smaller range of uptakes and weaker chemotaxis. We also find that G increases with the size parameter λ (Fig. 2(a) in the case s = 0), for larger λ corresponds to stronger variations of the concentration field and hence larger variations of uptake. The question of whether chemotactic levelling of the nutrient concentration field can make the distribution of individual uptake more ideal is addressed by varying α. Its impact is subtle: chemotaxis levels nutrients across the domain, but accumulation of cells near sources improves the uptake of some to the detriment of others. We find that G actually decreases with α (Fig. 2(a) for a single source), which reveals chemotactic levelling of the uptake rate among the population. In Fig. 1, G decreases from 0.3 with no chemotaxis to 0.25 when chemotaxis (α = 3) is allowed. This decrease is best understood for a single source on one side of the domain. Because c decreases monotonically from the source, the bacterial population can be split into quantiles of uptake that are ordered in space. Chemotaxis lowers the nutrient concentration over the whole domain, but mostly close to the source, and it shifts the center of mass and mean uptake level of each quantile. Fig. 3(a) shows that bacteria with the highest uptake, which are also closest to the source, are transferred to lower uptake levels. For the lower uptake quantiles, higher levels of uptake compared to the non-chemotactic case are attained due to chemotaxis toward the source. Together, these effects bring uptake levels closer to the average, lowering G: the bacterial system moves closer to the ideal free distribution.
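The quantile decomposition invoked above can be sketched as follows for a single source, where the monotonic concentration profile makes the uptake quantiles spatially ordered; the grid-based bookkeeping is an assumption of this illustration, not the paper's implementation.

```python
# Sketch of the uptake-quantile decomposition used above (cf. Fig. 3): for a single
# source (s = 0, source at x = lam) the concentration is monotonic in x, so cumulative
# sums of the bacterial density define spatially ordered quantiles whose mean uptake
# can be compared with and without chemotaxis.
import numpy as np

def quantile_mean_uptake(b, c, dx, n_quantiles=5):
    w = b * dx
    cum = np.cumsum(w) / np.sum(w)               # cumulative population fraction along x
    edges = np.linspace(0.0, 1.0, n_quantiles + 1)
    means = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (cum > lo) & (cum <= hi)
        means.append(np.sum(w[mask] * c[mask]) / np.sum(w[mask]))
    return np.array(means)                       # mean uptake (prop. to c) per quantile

# x, b, c = ks_steady_state(alpha=3.0, s=0.0); x0, b0, c0 = ks_steady_state(alpha=0.0, s=0.0)
# print(quantile_mean_uptake(b, c, x[1] - x[0]) - quantile_mean_uptake(b0, c0, x[1] - x[0]))
```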
The generality of this result can be established in the limit of weak chemotaxis (α ≪ 1), still with a single source, where a series solution G ≈ G_0(λ) + α G_1(λ) + ··· of (4) and (5) yields G_1(λ) < 0, so G indeed decreases with chemotaxis. A cumbersome analysis (not shown) for the general case s ∈ [0, 0.5], that is, G ≈ G_0(λ, s) + α G_1(λ, s) + ···, also shows that G_1(λ, s) < 0, thus extending this result to any ratio of source strengths.

In the limit of strong chemotaxis (α ≫ 1), where bacterial diffusion becomes irrelevant, we may expect to recover the ideal free distribution. Analytical progress on this steady-state problem is achieved by integrating (5) twice to obtain b(c), substituting into (4) and then expanding in powers of 1/α. One obtains a closed-form solution involving constants ω and β_1, while the constant β_2 depends on ω and the model parameters. Neglecting terms of order 1/α² and higher, this solution enables us to obtain an analytical expression for the Gini index in the limit of strong chemotaxis, valid to leading order in 1/α for s ∈ [0, 0.5]. This establishes further the generality of its decrease with α together with its increase with λ: in 1D, chemotaxis levels the uptake throughout the population. Moreover, in this range of high α, inequalities of uptake initially increase when the system changes from a single source to more balanced sources.

As this levelling of the uptake distribution appears as the microbial equivalent of the more uniform uptake displayed by ducks, it is natural to ask if, as in Harper's experiments, the bacterial population reaches the ideal free distribution as α → ∞ and splits into two localized sub-populations proportional to the source strengths. If we consider that the position x_0 of minimum bacterial concentration separates a left population B_L associated with the left source from its right-hand-side equivalent B_R, our analytical solution for α ≫ 1 directly yields the ratio B_L/B_R in terms of the source fluxes. Thus, the population associated with one source is, to leading order, directly proportional to the flux of this source, as in the ideal free distribution. Moreover, analysis of (18) shows that in the limit α → ∞, b(x) is localized in regions of width ∼ 1/√α at both x = 0 and x = λ, with peak values ∼ α at these positions. In the limit α → ∞, we thus get a localisation of the number of cells proportional to the source, at the source: this is the (unphysical) limit of a microbial ideal free distribution.

When the domain size parameter λ ≫ 1, the central portion of the domain has a steady-state concentration c ∼ 0, with very small gradients. Bacteria there are screened from the sources, unable to feel sufficient gradients to move chemotactically closer to them. The relative redistributive effect of chemotaxis compared to its absence must then diminish with distance. Considering the relative Gini index G(α)/G_0 in the approximation (15)-(16), we indeed find an optimal domain size, λ_G ≈ 3.12, for which the redistributive effect of chemotaxis is the strongest. An optimal size is also found in simulations beyond the linear regime in α (Fig. 2(b)) and with influx from both sides, with λ_G ≈ 3. The decrease of this relative change for high values of λ embodies the aforementioned screening, while the behavior at low λ results from a nearly uniform concentration over the domain, with only weak gradients for a chemotactic response. Does the uptake levelling found in d = 1 hold in higher dimensions?
To answer this, we solve (4) and (5) for a single spherical source of radius l in a closed spherical domain of radius L, both measured in units of l_k. We find that in d = 2 and 3 the effects of chemotaxis, for a given size of source and domain, are much weaker. Moreover, for certain parameter values, chemotaxis can actually increase G (inset in Fig. 3(b)). Analysis of the quantiles (Fig. 3(b), for a two-dimensional example) shows that in these cases, even though bacteria closest to the source have a lowered uptake, a majority of the bacteria that are already above the average uptake in the non-chemotactic case gain access to even higher uptake. Bacteria furthest from the source, and below the average uptake level in the non-chemotactic case, see their mean uptake decrease even further. Overall this corresponds to an increase of G: in higher dimensions, chemotaxis can bring the bacteria further away from the ideal free distribution, that is, it increases the inequalities among the population. The increase or decrease of inequalities of uptake, as revealed by the positive or negative change of G, may thus depend in detail on the system characteristics embodied in l, L and α.

IMPLICATIONS FOR FITNESS

What would be the biological consequences of chemotactic levelling of resources and of uptake rates? Uptake of nutrients governs a wide range of bacteriological processes, among which are cell growth and division. In particular, the yield of biomass per unit nutrient taken up is an increasing function of the available nutrient concentration, a feature which has been suggested as selective for the response characteristics of chemotaxis [13]. Here we provide a brief discussion of how the chemotactically driven redistribution of resources throughout a population can impact the average growth rate of the population, which we consider a measure of fitness. Continuing the point of view taken in the introduction, we show that while at the single-cell level in a defined resource field chemotaxis may increase fitness, this is not necessarily true at the population level.

We compute an average growth rate μ̄ over the population from the steady-state distributions of (5), in a model in which the local growth rate μ(x) is proportional to the local uptake rate through a yield function y(c). Whereas the mean uptake is fixed by the boundary conditions in our problem, this average growth coefficient will depend on the distribution of the resource among the bacteria and thus will be modified by chemotaxis. In order to capture the increase of y with c, we consider that y = 0 below a threshold concentration c_min, and adopt a Michaelis-Menten form above it [37], with K a saturation constant of the yield (Fig. 4(a)). The threshold concentration c_min can be considered as the limit below which all the uptake is directed toward maintenance costs [36]. The relative change of the average growth rate due to chemotaxis (μ̄_chemo − μ̄_nonchemo) is shown in Fig. 4(b). We observe that for lower values of the threshold concentration c_min, the redistributive effect increases the number of cells that reach the growth threshold: the population fitness becomes higher with chemotaxis than without. However, for higher values of the threshold concentration c_min, the redistribution of uptake throughout the population leaves more cells below the growth threshold: we get the counter-intuitive result that chemotaxis effectively lowers population fitness with respect to the non-chemotactic case.
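A hedged sketch of the population-averaged growth rate and of a threshold-plus-saturation yield of the kind described above is given below; here f(c) ∝ c is the local uptake, and y_max and the argument c − c_min are introduced purely for illustration, since the text fixes only the threshold c_min and the saturation constant K:

\[
\bar{\mu} = \frac{\int_{\Omega} b(x)\, y\bigl(c(x)\bigr)\, f\bigl(c(x)\bigr)\,\mathrm{d}x}{\int_{\Omega} b(x)\,\mathrm{d}x} ,
\qquad
y(c) = \begin{cases} 0, & c < c_{\min},\\[4pt] \dfrac{y_{\max}\,(c - c_{\min})}{K + (c - c_{\min})}, & c \ge c_{\min}. \end{cases}
\]

Any yield with a threshold and saturation of this type reproduces the qualitative competition described in the text between cells pushed above and below the growth threshold by chemotactic redistribution.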
This result, which stems from the competition of the bacteria for the same resource, shows that in the context of continuous sources, a homogeneously chemotactic population could be selected against due to the dilution of a scarce resource resulting from the chemotactic behavior.

CONCLUSIONS

We have shown that the organization of bacteria around localized nutrient sources is fundamentally different from that of higher animals due to diffusion of resources and feeders. Yet, there are still common characteristics. First, what might be termed 'foraging' behavior decreases the maximum uptake rate through competition for the resource; in the bacterial case this corresponds to a decrease of the maximum of the concentration field with chemotaxis. Second, foraging generates, quite evidently, localization and accumulation of the population closer to the resources. But whereas the conjunction of these two phenomena brings Harper's ducks to the ideal free distribution, it may fail in the microbial world: it brings the system closer to or further from this ideal distribution depending on the spatial dimensionality and on parameters capturing the strength of chemotaxis, the size of the resources and the distance between them. The redistribution of uptake is not without consequences: when the resource is scarce in comparison to the metabolic needs, chemotaxis effectively dilutes it and reduces the average population fitness.

The issues addressed here suggest experimental studies of model systems in physical ecology for which in situ measurements of local metabolic activity and nutrient concentration fields are possible. Optically based quantitative measures of photosynthetic activity [38], probes of local oxygen concentration [39], and local mass spectrometry [40] are examples of relevant techniques. Microbial communities in biofilms, sediments [41], and algae sustaining a motile population of bacteria around them by releasing oxygen [42] represent interesting systems in which to study the distribution of uptake rates.
Single-photon pump by Cooper-pair splitting

Hybrid quantum dot-oscillator systems have become attractive platforms to inspect quantum coherence effects at the nanoscale. Here, we investigate a Cooper-pair splitter setup consisting of two quantum dots, each linearly coupled to a local resonator. The latter can be realized either by a microwave cavity or a nanomechanical resonator. Focusing on the subgap regime, we demonstrate that cross-Andreev reflection, through which Cooper pairs are split into both dots, can efficiently cool both resonators simultaneously into their ground states. Moreover, we show that a nonlocal heat transfer between the two resonators is activated when appropriate resonance conditions are matched. The proposed scheme can act as a heat-pump device with potential applications in heat control and cooling of mesoscopic quantum resonators.

For large intradot Coulomb interactions, U, and superconducting gap, |Δ| → ∞, the proximity of the superconductor causes a nonlocal splitting (and recombination) of Cooper pairs into both dots with the pairing amplitude Γ_S > 0. The corresponding Andreev bound states |±⟩ are a coherent superposition of the dots' singlet, |S⟩, and empty state, |0⟩. The dots are further tunnel-coupled to normal contacts, which are largely negative-voltage-biased with respect to the chemical potential μ_S = 0 of the superconductor. In this configuration, due to single-electron tunneling, the singlet state decays at rate Γ into a singly-occupied state, |ασ⟩ (α = L, R and σ = ↑, ↓), and further into the empty state, see Fig. 1(b). For large dot onsite energies ε ≫ Γ_S, the charge hybridization is weak (|+⟩ ≈ |S⟩, |−⟩ ≈ |0⟩), and the transitions |+⟩ → |ασ⟩ and |ασ⟩ → |−⟩ are faster than the opposite processes, 79 see Fig. 1(c). This asymmetry in the relaxation ultimately explains how to pump or absorb energy within a single mode, and how to transfer photons between the cavities. In the latter case, when the energy splitting δ between the Andreev bound states is close to the difference of the cavity frequencies, the relevant level structure of the uncoupled system is summarized in Fig. 1(d). We show below that the effective interaction couples the states |+, n_L−1, n_R+1⟩ and |−, n_L, n_R⟩, where n_α indicates the Fock number in the resonator α. An electron tunneling event favours transitions |+⟩ → |ασ⟩ → |−⟩ conserving the photon number. When the system reaches the state |−⟩ ≈ |0⟩, this coherent cycle restarts. When the system is in |+⟩, it can again decay. During each cycle, a boson is effectively transferred from the left to the right cavity.

FIG. 1. (a) Cooper-pair splitter consisting of two quantum dots coupled to a common superconductor (S) and two normal-metal contacts (α = L, R). Each dot is capacitively coupled to a local resonator with frequency ω_α. (b) At large bias voltage, incoherent tunneling events at rate Γ lead to a decay of the singlet state, |S⟩, via a singly-occupied one, |ασ⟩ (σ = ↑, ↓), to the empty state, |0⟩, whereby |0⟩ and |S⟩ are coherently coupled with amplitude Γ_S. (c) The latter coupling leads to the formation of hybridized |±⟩ states of energy splitting δ. For weakly hybridized states |0⟩ and |S⟩, the transitions |±⟩ ↔ |ασ⟩ are strongly asymmetric. (d) Photon transfer cycle occurring around the resonance, δ ≈ ω_L − ω_R, with the effective coupling strength λ_NL.
Since the two cavities are not isolated, but naturally coupled to external baths, a steady heat flow is eventually established between the cavities. The effect discussed above refers to a single operation point of the system. More generally, using a master equation approach, we show that the interaction between the CPS and the two resonators opens a rich set of inelastic resonant channels for the electron current through the dots, involving either absorption/emission of photons from a local cavity or nonlocal transfer processes. By tuning ε to match these resonances, the CPS acts as a switch allowing the manipulation of heat between the resonators. Each resonant process can be captured with good approximation by an effective Hamiltonian which is valid close to the resonance and generalizes the mechanism described above.

This work is structured as follows. After introducing our model and the employed master equation in Sec. II, we provide therein an effective Hamiltonian describing local and nonlocal transport processes. In Section III, we discuss the possibility of simultaneous cooling (and heating) of the resonators. Section IV is dedicated to the nonlocal photon transfer between them, and in Sec. V we analyze the efficiency of this transfer. Finally, we draw our conclusions in Sec. VI.

II. COOPER-PAIR SPLITTER COUPLED TO RESONATORS

We consider the effective model for two single-level quantum dots proximized by an s-wave superconductor, each linearly coupled to a local harmonic oscillator. For large intradot Coulomb interaction U and superconducting gap |Δ|, the subgap physics of the system is described by the effective Hamiltonian (1), 28,32,80-86 where we set ℏ = 1. Here, d_ασ is the fermionic annihilation operator for a spin-σ electron in dot α, with the corresponding number operator N_ασ and onsite energy ε. The interaction of the dot with the α-oscillator of frequency ω_α and corresponding bosonic field b_α is realized through the charge term, with coupling constant λ_α. The relevant subspace of the electronic subsystem is spanned by six states: the empty state |0⟩, the four singly-occupied states |ασ⟩ = d†_ασ|0⟩ and the singlet state |S⟩. Triplet states and doubly-occupied states are inaccessible due to the large negative voltages, see Fig. 1(a), and the large intradot Coulomb repulsion. Finally, in the subgap regime, the superconductor can only pump Cooper pairs, which are in the singlet state. The states |0⟩ and |S⟩ are hybridized due to the Γ_S-term, yielding the Andreev states |+⟩ = cos(θ/2)|0⟩ + sin(θ/2)|S⟩ and |−⟩ = −sin(θ/2)|0⟩ + cos(θ/2)|S⟩, with the mixing angle θ = arctan[Γ_S/(√2 ε)]. We denote their energy splitting by δ = √(4ε² + 2Γ_S²).

Electron tunneling into the normal leads and dissipation for the resonators can be treated in the sequential-tunneling regime to lowest order in perturbation theory, assuming small dot-lead tunneling rates, Γ ≪ Γ_S, k_B T, and large quality factors. Here, κ_α is the decay rate for the α-resonator and T is the temperature of the fermionic and bosonic reservoirs. The fermionic and bosonic transition rates between two eigenstates |i⟩ and |j⟩ of Hamiltonian (1) are given by Fermi's golden rule, 87 evaluated at the energy difference between the two eigenstates. We use the notation d^(−)_ασ (d^(+)_ασ) for fermionic annihilation (creation) operators, and correspondingly b^(±)_α for the bosonic ones. The populations P_i of the system eigenstates obey a Pauli-type master equation, Eq. (4), 28,88,89 which admits a stationary solution given by P_i^st.
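For reference, the generic form of such a Pauli-type master equation for the populations P_i, written in terms of the total rates w_{j←i} defined below (standard textbook form rather than a verbatim transcription of Eq. (4)), is

\[
\dot{P}_i = \sum_{j \neq i}\bigl[\, w_{i \leftarrow j}\,P_j \;-\; w_{j \leftarrow i}\,P_i \,\bigr],
\qquad
\dot{P}_i = 0, \quad \sum_i P_i^{\mathrm{st}} = 1 \quad \text{(stationary state)} .
\]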
The total rates entering Eq. (4) are given by w_{j←i} = Σ_{α,s} (w^{α,s}_{el, j←i} + w^{α,s}_{ph, j←i}). As mentioned before, we assume the chemical potentials of the normal leads, μ_α = −eV, to be largely negative-biased, i.e., U, |Δ| ≫ eV ≫ k_B T, ε, Γ_S, with V > 0 and e > 0 denoting the applied voltage and the electron charge, respectively. In this regime, the electrons flow unidirectionally from the superconductor via the quantum dots into the leads; the temperature of the normal leads becomes irrelevant, and the rates w^{α,+}_{el, j←i} vanish. Under these assumptions, the stationary electron current through lead α is simply given by I_α = eΓ Σ_σ ⟨N_ασ⟩. For a symmetric configuration, as assumed here, both stationary currents coincide, I_L = I_R. To evaluate the stationary current and the other relevant quantities, we diagonalize Hamiltonian (1) numerically and build the transition-rate matrices appearing in Eq. (4). The stationary populations, P_i^st, are then found by solving the system of Eqs. (4) for Ṗ_i = 0.

In order to explain our numerical results, we perform the Lang-Firsov polaron transformation of Hamiltonian (1). 90-92 For an operator O, we denote the corresponding polaron-transformed operator by Ō. The polaron-transformed Hamiltonian then takes the form given in Eq. (5), with ε̄_α = ε − λ_α²/ω_α and X = exp(Σ_α Π_α). 93 Equation (5) contains a transverse charge-resonator interaction term to all orders in the couplings λ_α. Intriguingly, this coupling has a purely nonlocal origin stemming from the cross-Andreev reflection. By expanding X in powers of Π ≡ Σ_α Π_α, assuming small couplings λ_α ≪ ω_α, and moving to the interaction picture with respect to the noninteracting Hamiltonian, we can identify a family of resonant conditions given by Eq. (6), δ̄ ≈ |p ω_L ± q ω_R|, with p, q nonnegative integers, as discussed in Appendix A. Here, δ̄ = √(4ε̄² + 2Γ_S²) is the renormalized energy splitting of the Andreev states due to the polaron shift. Around the conditions stated in Eq. (6), a rotating-wave approximation yields an effective interaction of order p + q in the couplings λ_α. Hereafter, we discuss in detail the resonances at δ̄ = ω_L = ω_R and δ̄ = ω_L − ω_R, corresponding to one- and two-photon processes, respectively. They can be fully addressed by expanding X up to second order in λ_α/ω_α and subsequently performing a rotating-wave approximation, see Appendix A.
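The numerical procedure described above (diagonalize the Hamiltonian, assemble the golden-rule rates, solve the rate equations for the stationary populations) can be sketched in a few lines; the code below is a generic illustration with names chosen by us, not the authors' implementation.

import numpy as np

def stationary_populations(w):
    """Stationary solution of a Pauli master equation dP/dt = W P = 0.

    w[j, i] is the total transition rate from eigenstate i to eigenstate j,
    e.g. the sum of the electronic and bosonic golden-rule rates.
    """
    n = w.shape[0]
    W = np.array(w, dtype=float)
    np.fill_diagonal(W, 0.0)
    W -= np.diag(W.sum(axis=0))      # diagonal: minus the total escape rate of each state
    # replace one balance equation by the normalization sum_i P_i = 1
    A = np.vstack([W[:-1, :], np.ones(n)])
    rhs = np.zeros(n)
    rhs[-1] = 1.0
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

Expectation values such as the currents I_α or the photon numbers n̄_α then follow by weighting the corresponding eigenstate matrix elements with these stationary populations.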
III. SIMULTANEOUS COOLING AND HEATING

For δ̄ = ω_L = ω_R, one can achieve simultaneous cooling as well as heating of both resonators, which is already described by the first-order terms in λ_α of Eq. (5). Here, we consider two identical resonators and tune the dot levels around the resonance condition δ̄ ≈ ω_α, i.e., ε̄ = ±√(ω_α² − 2Γ_S²)/2. The effective first-order interaction Hamiltonian, Eq. (7), follows after a rotating-wave approximation, as we show in Appendix A. The operators τ_+ = |+⟩⟨−| and τ_- = |−⟩⟨+| describe the hopping in the two-level system formed by the states |+⟩ and |−⟩, coupled to the modes through a transverse Jaynes-Cummings-like interaction. The effective coupling is proportional to sin θ̄ = √2 Γ_S/δ̄, and is thus a direct consequence of the nonlocal Andreev reflection. The effective interaction in Eq. (7) coherently mixes the three states |+, n_L, n_R⟩, |−, n_L + 1, n_R⟩, and |−, n_L, n_R + 1⟩, which are degenerate for H_loc = 0. When |ε| ≫ Γ_S, the hybridization between the charge states is weak. The sign of ε changes the bare dots' level structure: for ε < 0, |+⟩ ≈ |0⟩ and |−⟩ ≈ |S⟩, whereas for ε > 0, |+⟩ ≈ |S⟩ and |−⟩ ≈ |0⟩. In the latter case, the chain of transitions |+⟩ → |ασ⟩ → |−⟩ is faster than the opposite process, see Fig. 1(c). For ε < 0, energy is pumped into the modes. Conversely, for ε > 0, we can achieve simultaneous cooling of the resonators.

In Fig. 2, we show the stationary electron current I_α [calculated using the full Hamiltonian (1)], together with the average photon number n̄_α = ⟨b†_α b_α⟩ of the corresponding resonator, as a function of ε. The broad central resonance of width Γ_S corresponds to the elastic current contribution mediated by the cross-Andreev reflection. The additional inelastic peak at negative ε is related to the emission of photons in both resonators at δ̄ ≈ ω_α. At finite temperature, a second sideband peak emerges at positive ε, where the resonators are simultaneously cooled down. The cavities are efficiently cooled into their ground state for a wide range of values of Γ_S, as can be appreciated in the inset of Fig. 2(b). The optimal cooling region is due to the interplay between the effective interaction with the resonators, which vanishes for small Γ_S, and the hybridization of the empty and singlet state, which increases as ε approaches the Fermi level of the superconductor and reduces the asymmetry of the transitions |±⟩ ↔ |ασ⟩.

IV. NONLOCAL PHOTON TRANSFER

By keeping terms up to second order in λ_α in Eq. (5), we can describe the resonances around δ̄ = ω_L − ω_R and δ̄ = ω_L + ω_R. Assuming without loss of generality ω_L > ω_R, a rotating-wave approximation yields the effective interaction terms H^(±)_NL. These terms show that the two resonators become indirectly coupled through the charge states, with the strength λ_NL given in Eq. (8). We remark that this interaction, too, is purely nonlocal. The term H^(+)_NL couples the states |+, n_L − 1, n_R − 1⟩ with |−, n_L, n_R⟩, through which photons at different frequencies are simultaneously absorbed (emitted) from (into) both cavities. Conversely, the term H^(−)_NL describes processes by which the superconductor mediates a coherent transfer of photons between the resonators, by coupling the subspaces |+, n_L − 1, n_R + 1⟩ and |−, n_L, n_R⟩, see Fig. 1(d). Notice that this effect vanishes if the two resonators are of the same frequency, as it would require δ̄ = 0 and, thus, Γ_S = 0.

In Fig. 3(a), we report the electronic current, again calculated with the full interaction, assuming two different resonator frequencies. In addition to the sideband peaks close to δ̄ = ω_L and δ̄ = ω_R, we can identify higher-order multiphoton resonances (e.g., δ̄ = 2ω_R, where the cooling cycle involves the absorption of two photons from the same cavity) which can be described in a similar way with a rotating-wave approximation, see Appendix A. Moreover, we observe the second-order peaks described by H^(±)_NL, which are responsible for processes involving both resonators. The inset of Fig. 3(c) reports the average occupation of the resonators in the vicinity of the resonance δ̄ = ω_L − ω_R, where the right mode is heated and the left one is cooled. The shape of these resonances differs from the first-order peaks (which are well approximated by Lorentzians): we show in Appendix A that the second-order Hamiltonian indeed contains an additional term proportional to sin(θ̄)(2n_α + 1)τ_z, which causes both a small frequency shift for each resonator (yielding a double-peak structure) and a small renormalization of the splitting δ̄ between the Andreev bound states. Nevertheless, these corrections do not alter the main physics captured by H^(−)_NL.
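To make the structure of these nonlocal terms explicit, a compact shorthand consistent with the subspaces they couple is the following; it is our sketch, written up to overall phases and without the dispersive n_α τ_z corrections just mentioned, and not a verbatim transcription of the expressions in the original:

\[
H^{(-)}_{\mathrm{NL}} \simeq \lambda_{\mathrm{NL}}\bigl(\tau_{+}\, b_{L} b_{R}^{\dagger} + \tau_{-}\, b_{L}^{\dagger} b_{R}\bigr),
\qquad
H^{(+)}_{\mathrm{NL}} \simeq \lambda_{\mathrm{NL}}\bigl(\tau_{+}\, b_{L} b_{R} + \tau_{-}\, b_{L}^{\dagger} b_{R}^{\dagger}\bigr) .
\]

The first term exchanges one photon between the cavities while flipping the Andreev two-level system, and is resonant at δ̄ ≈ ω_L − ω_R; the second simultaneously absorbs (emits) one photon from (into) each cavity and is resonant at δ̄ ≈ ω_L + ω_R.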
V. HEAT TRANSFER AND EFFICIENCY

To quantify the performance of both cooling and nonlocal photon transfer, we calculate the stationary heat current 66,67,87 flowing from the bosonic reservoir α to the corresponding resonator. It is negative (positive) when the resonator is cooled (heated), and vanishes for an oscillator in thermal equilibrium. As a figure of merit for local cooling, we can estimate the number of bosonic quanta subtracted from the resonator on average per unit time, and compare it to the rate at which Cooper pairs are injected into the system. The latter rate is given by |I_S|/2e, with I_S = −(I_L + I_R) being the Andreev current through the superconductor found from current conservation. Consequently, the local cooling efficiency η^(α)_loc around δ̄ = ω_α can be defined as the ratio of these two rates. Similarly, around δ̄ = ω_L − ω_R, we define the heat transfer efficiency η_NL as the number of photons transferred between the resonators per injected Cooper pair. Figures 3(b) and (c) show η^(L)_loc and η_NL, respectively, as a function of ε close to the corresponding resonances. In both cases, we obtain high efficiencies close to 90%: approximately one photon is absorbed from each cavity (local cooling) or transferred from the left to the right cavity (nonlocal transfer) per Cooper pair. The efficiency is essentially limited by two factors: (i) an elastic contribution to the current [the broad resonance of linewidth ∝ Γ_S in Figs. 2(a) and 3(a)] where electrons flow without exchanging energy with the cavities; (ii) a finite fraction of the injected electrons acting against the dominant process (cooling or photon transfer), as illustrated by the dashed blue arrows in Fig. 1(d). Both effects become more pronounced with increasing Γ_S and are a byproduct of the finite hybridization between the empty and the singlet state which, however, is crucial for achieving a nonzero efficiency.

VI. CONCLUSIONS

We have analyzed a CPS in a double-quantum-dot setup, with local charge couplings to two resonators. We have demonstrated that Cooper-pair splitting can generate a nonlocal transfer of photons and heat from one oscillator to the other, resulting in a stationary energy flow. Such energy flows can also be channeled to locally cool or heat a single cavity. Hence, our system constitutes a versatile tool to fully inspect heat exchange mechanisms in hybrid systems, and is a testbed for quantum thermodynamics investigations involving both electronic and bosonic degrees of freedom. Due to the single-photon nature of the coherent interactions, this can also be extended to achieve few-phonon control and manipulation, 94,95 e.g., by implementing time-dependent protocols for the dots' gate voltages to tune dynamically the strength of the nonlocal features. Further practical applications include high-efficiency nanoscale heat pumps and cooling devices for nanoresonators.

A discussion of the experimental feasibility of our setup is in order. For single quantum dots coupled to microwave resonators, λ_α/(2π) can reach 100 MHz, with resonators of quality factors Q ∼ 10⁴ and frequencies ω_α/(2π) ∼ 7 GHz. 49,51 For mechanical resonators, coupling strengths of λ_α/(2π) ∼ 100 kHz for frequencies of order ω_α/(2π) ∼ 1 MHz and larger quality factors up to 10⁵-10⁶ have been reported. 76 In a double-quantum-dot Cooper-pair splitter setup, the cross-Andreev reflection rate is Γ_S ≈ √(Γ_SL Γ_SR) when the distance between the dots is much shorter than the coherence length in the superconducting contact. 36
Here, Γ_Sα is the local Andreev reflection rate, which can reach several tens of µeV, becoming comparable to typical microwave resonator frequencies (thus allowing Γ_S ∼ ω_α) while being orders of magnitude lower than the superconducting gap Δ. 96 Therefore, the regime of parameters we considered lies within the range of state-of-the-art technological capabilities. Moreover, experiments involving Cooper-pair splitters 7-17 or mesoscopic cQED devices with microwave cavities 40,42,46,51,54,72-74 and mechanical resonators 43,75,76,78 are appealing and of growing interest, and are therefore promising candidates for the implementation of the system described here.

This research was supported by the German Excellence Initiative through the Zukunftskolleg and by the Deutsche Forschungsgemeinschaft through the SFB 767. R.H. acknowledges financial support from the Carl-Zeiss-Stiftung.

Appendix A: Polaron-transformed Hamiltonian and effective nonlocal interaction

We report here the derivation of the effective interactions that explain the local cooling or heating, and the nonlocal photon transfer mechanisms. The starting point is the polaron-transformed Hamiltonian given in Eq. (5) of the main text. For small coupling strengths λ_α ≪ ω_α, we expand the operators X and X† up to second order in λ_α. The dots-cavities interaction term can then be expressed in terms of the generalized total momentum iΠ = Σ_α iΠ_α and the operators σ_x = |0⟩⟨S| + H.c. and σ_y = −i|0⟩⟨S| + H.c. The σ_x-term describes tunneling between the empty and the singlet state due to the superconductor, and is already present in Hamiltonian (1) of the main text. Diagonalizing the bare electronic part leads to the hybridized charge states, with the mixing angle θ̄ and the energy splitting δ̄ defined in the main text. After introducing the Pauli matrices τ_i in the basis of these hybridized states, we move to the interaction picture with respect to the noninteracting Hamiltonian H_0 = Σ_ασ ε̄_α N_ασ + (δ̄/2) τ_z + Σ_α ω_α b†_α b_α. By recalling the definition of Π, we obtain the interaction-picture Hamiltonian (A5). Here, we have introduced Ω = ω_L + ω_R and Δω = ω_L − ω_R.

Hamiltonian (A5) contains all the terms that lead to cooling, heating, and nonlocal photon transfer. To isolate these features, we will focus on the relevant resonances δ̄ ≈ ω_α, δ̄ ≈ Ω, and δ̄ ≈ Δω. First, let us consider two identical resonators of frequency ω_α = ω and tune ε such that δ̄ = ω. Notice that this can be fulfilled by two values of ε, of opposite sign. In the following, we restrict Eq. (A5) to first order in λ_α, and then discard the fast-oscillating terms by performing a standard rotating-wave approximation (RWA). Thus, we obtain the time-independent interaction Hamiltonian given by Eq. (7) in the main text. We have used here the resonance condition ω = δ̄ and the relation sin θ̄ = √2 Γ_S/δ̄.

Let us now consider the nonlocal resonance, δ̄ = Δω. A peculiarity here is that we have to go to second order in λ_α, since the first-order terms become fast rotating in the RWA and, thus, average to zero. The corresponding effective Hamiltonian, Eq. (A7), involves the photon number operators n_α = b†_α b_α and the coupling λ_NL stated in Eq. (8) of the main text. The second term corresponds to the interaction H^(−)_NL (main text), and is responsible for the coherent transfer of photons between the cavities, leading to a stationary energy flow. The first term in Eq. (A7), proportional to n_α τ_z, can be seen as a dispersive shift of the cavity frequencies, which depends on the Andreev bound state.
As the quantities reported in Fig. 3 of the main text are averages calculated from the density matrix, this translates into a fine double-peak structure of the nonlocal resonance, see Fig. 3(c) of the main text. Further, the additional term proportional to τ_z renormalizes the level splitting δ̄ and, therewith, the resonance condition, δ̄ = Δω. Considering the condition δ̄ = Ω, we obtain in an analogous way the effective interaction H^(+)_NL of the main text. From the last line of Eq. (A5), one can infer an effective RWA Hamiltonian governing the resonance condition δ̄ ≈ 2ω_α. It is similar to Eq. (A6), but involves absorption and emission of two photons from the same cavity. Indeed, this two-photon resonance is also observable in Fig. 3(a) of the main text and yields cavity cooling for ε > 0 and heating for ε < 0, respectively. By including terms up to n-th order in Π in Eq. (A4), one obtains terms (b_α)^n and (b†_α)^n, which, after moving to the interaction picture and performing a suitable RWA, will yield n-photon local absorption/emission processes. The expansion also contains terms of the form (b†_α)^p (b_ᾱ)^q and (b†_α)^p (b†_ᾱ)^q together with their Hermitian conjugates, with p + q = n (ᾱ = R if α = L and vice versa). The former terms describe the coherent transfer of |p − q| photons between the cavities, while the latter describe the coherent emission and re-absorption of p and q photons from the α and ᾱ cavity, respectively. The general (approximate) resonance condition thus reads δ̄ ≈ |p ω_L ± q ω_R|, as stated in Eq. (6) of the main text. If either p or q is zero, the resonance corresponds to local cooling/heating of the cavities.
Evaluating Cataract Surgical Rate through Smart Partnership between Ministry of Health, Malaysia and Federal Territory Islamic Religious Council

Introduction. Cataract is the leading cause of blindness. About 90% of cataract blindness occurs in low- and middle-income countries. The prevalence of blindness and low vision in any country depends on the socioeconomic status, the availability of medical and healthcare facilities, and the literacy of the population. Aim: This paper aims to estimate the cataract surgery rate (CSR) at Pusat Pembedahan Katarak, MAIWP-Hospital Selayang (Cataract Operation Centre), and provide descriptive assessments of the patients who received eye treatments in the center. Methods: The data were retrieved from the clinical database from 2013 to 2016. Information on the patients' sociodemographic, clinical and treatment history was collected. Results: The cataract surgery rate for 2013 was about 27 and increased to 37.3 in 2014. However, it declined to 25 in 2015 before recovering to 36 in 2016. The proportion of female patients who received eye treatments at Pusat Pembedahan Katarak, MAIWP-Hospital Selayang, was higher (53.7%) than that of male patients (46.3%). The mean duration of cataract surgery from 2013 to 2016 was 21.25 ± 11.071 min. Conclusion: The increased cataract surgery rate achieved by MAIWP-HS through a smart partnership for day care cataract surgery indicates that better accessibility makes the short- and long-term strategies for the reduction and prevention of blindness in Malaysia achievable.

Introduction

The World Report on Vision (WRV), released by the World Health Organization (WHO), stated that 2.2 billion people across the world have a vision impairment. Of these, a minimum of 1 billion have a preventable vision impairment [1]. This report suggested that the delivery of universal eye health coverage (UEHC) can be optimized through integrated, people-centered eye care (IPCEC). Therefore, the identification and development of a national health delivery system is crucial to improve the quality and quantity of services needed to prevent and address vision impairment [2]. The majority (80%) of blindness can be prevented. There are many causes of blindness among adults and children. For adults, cataract has been identified as one of the leading causes of blindness and has affected more than 18 million people. About 95% of blindness from cataract occurs in low- and middle-income countries (LMICs), and cataract accounts for 51% of the total cases of blindness [3]. Among children, blindness can be due to measles and vitamin A deficiency [4]. The major causes of blindness vary widely across the continents, which is related to the level of socio-economic development across countries, the availability of medical and healthcare facilities and the literacy of the population. In Malaysia, the most common causes of blindness and low vision are cataract, refractive errors, glaucoma, diabetic retinopathy and retinopathy of prematurity [5]. Surveys conducted by the University of Malaya Medical Centre among the urban population in Kuala Lumpur Federal Territory in 2014 [6] and in the Klang Valley in 2008 revealed a prevalence of cataract of 32.9%, of glaucoma of 23.4% and of diabetic retinopathy of 9.6% [7]. The Malaysia National Eye Survey conducted in 2014 found age- and gender-adjusted prevalences of blindness, severe visual impairment and moderate visual impairment of 1.2%, 1.0% and 5.9%, respectively.
The most common causes of blindness were untreated cataract (58.6%), diabetic retinopathy (10.4%) and glaucoma (6.6%). The majority of the causes of blindness (86.3%) were avoidable, and 58.6% of the causes of blindness were treatable [8]. Lens extraction surgery is the only effective method to treat cataract and is the most commonly performed ocular procedure in the world. With the increase in safety and improvements in visual outcomes, cataract surgery with intraocular lens implantation is now increasingly performed for the treatment of other conditions, including refractive error and angle closure glaucoma [9-11]. Cataract surgery has evolved from manual cataract extraction to phacoemulsification using an ultrasonic technique and to laser-assisted cataract surgery, which bring more precise incisions, fewer complications and better visual outcomes. Consequently, cataract surgery now involves greater costs, highly trained personnel to perform the surgery and longer learning curves [12]. Presently, cataract surgery is available in both public and private hospitals in Malaysia.

One of the strategies of the MAIWP is to strengthen the efforts to improve the welfare of the people residing within the Federal Territories and to increase the assets belonging to the council through investments and other halal ventures for the benefit of the people. Pusat Pembedahan Katarak, MAIWP-Hospital Selayang (Cataract Operation Centre, MAIWP-Hospital Selayang), which started operations on 16 January 2013, is a dedicated center for cataract surgery and is the only successful center in Southeast Asia that was developed from a strategic initiative and inter-agency collaboration between the MAIWP and the Ministry of Health (MOH), Malaysia, through the Selayang Hospital, with the aim of helping underprivileged patients get medical attention and treatment. This initiative is a unique and dynamic national strategy platform that brings together ministries, agencies, all levels of government and the private sector on a voluntary basis. The design and selection of this initiative are based on two key principles, namely, delivering a high income through economic growth and integrated development, and enhancing the level of public well-being through greater security as well as social inclusion. Therefore, this initiative will be able to close the social distance between various groups in Malaysian society, for example, rural vs. urban, young vs. old, or men vs. women [13]. Subsequently, the MAIWP distributed almost MYR 10 million to renovate their building premises into a surgical center equipped with sophisticated and up-to-date surgical equipment.

The Selayang Hospital is situated in the suburbs of the Gombak district in Selangor, with a population of almost one million [14]. Selangor is a state on the west coast of Peninsular Malaysia with almost seven million people in its total population, encircling Kuala Lumpur, the capital of Malaysia (Figure 1). It comprises nine districts: Gombak, Klang, Kuala Langat, Kuala Selangor, Petaling, Sabak Bernam, Sepang, Hulu Langat and Hulu Selangor. There are four major cities in Selangor, namely, Shah Alam, Klang, Kajang, Petaling and Subang Jaya, which are defined as urban areas, while the other districts are in the suburbs.
They have to undergo an extensive one-year training and subspecialty training according to their chosen fields after the master's program. A master's in ophthalmology is one of the training programs for the doctors who are interested in pursuing a career in this area. They will be given continuous training during the master's program and will be assigned to the specialists expert in the field. Due to the lack of space and the high number of patients received in Selayang Hospital, cataract patients have to wait up to a year before surgery can be performed. This hospital has two fully functioning operating theatres, which includes all other cataract care pathways under one roof. The ophthalmology clinic and other surgical clinics sessions open daily from Monday to Friday. Therefore, all the scheduled routine cataract surgeries are performed in Pusat Pembedahan Katarak, MAIWP by the ophthalmologists from Selayang Hospital to shorten the waiting period and increase the cataract surgery rates with high-quality service and good outcomes among all the scheduled cataract surgery cases. Cataract surgery is funded by MAIWP, whereas Selayang Hospital provides the service, including human resources and medical services, including the follow up after surgery [15]. If the incidence of cataract is higher than the total number of cataract surgeries, this will lead to an increased backlog of people who require cataract surgery. With the establishment of the Pusat Pembedahan Katarak, MAIWP-Hospital Selayang, the waiting period for cataract patients to undergo surgery has been reduced from sixteen to two weeks. Ethical Approval This study was registered under the Malaysia National Medical Research Registry (NMRR) with the identification number NMRR-19-3151-51710 and was funded by the MOH operational budget. The MOH, Malaysia, allows the use of secondary data from either registry or hospital clinical databases, provided that the data are anonymized. Therefore, the data were de-identified prior to analysis. Data Collection Since 2013, with the implementation of government policies on smart partnership, data for all cataract patients who had operations in Hospital Selayang were recorded in a clinical database using a standardized information form called a "Registration Form", which contains information on the following: patient's name, gender, age, ethnicity, home address, cataract classification, detailed ocular examination, such as eyelid anatomy and inflammation, presence of abnormalities in the cornea, fundal examination for retina abnormalities, biometry test, naked eyesight, visual acuity, intra-ocular pressure, surgeon's name, surgical management, whether they have had intraocular lens (IOL) implantation or not and operation outcome (mainly 1st-3rd postoperative day corrected visual acuity (VA)). All surgeons who performed the cataract surgeries must fill out the registration form by the 10th day of the following month. For this study, records were retrieved from the clinical database from Hospital Selayang, Selangor, Malaysia. A total of 2266 patients underwent cataract surgery at Pusat Pembedahan Katarak, MAIWP-Hospital Selayang from 2013 to 2016. All the patients' information was retrieved from the clinical database. Data from all cataract patients were extracted from the data registration form. The information from the registration forms were transferred to the local Eye Clinic Management System (ECMS), which synchronizes with National Eye Database (NED) at regular intervals. 
NED is a database supported by the MOH. It is an eye health information system that contains a clinical database consisting of six patient registries and a monthly ophthalmology service census. The patient registries are the Cataract Surgery Registry, Diabetic Eye Registry, Contact Lens-Related Corneal Ulcer Surveillance, Glaucoma Registry, Retinoblastoma Registry and Age-Related Macular Degeneration Registry.

Definition

This paper used the WHO definitions of blindness, low vision and visual impairment. Blindness was defined as presenting a visual acuity of less than 3/60, or as the inability to count fingers at a distance of three meters, in the better eye using the available means of correction (with spectacles when available). Low vision was defined as presenting a visual acuity of less than 6/18 but equal to or better than 3/60 in the better eye when using the available means of correction (with spectacles when available). Visual impairment was defined as presenting a visual acuity of less than 6/18 in the better eye when using the available means of correction (with spectacles when available). Cataract was defined as the presence of lens opacity, with a grey or white appearance of the pupil when examined with an oblique light in a shaded or darkened area. Refractive errors were defined as visual impairment that improved to 6/18 or better with a pinhole, with no evidence of cataract in a torchlight examination. Retinal diseases were defined as retinal abnormalities caused by dystrophy, degeneration or acquired metabolic causes, such as diabetes mellitus. Glaucoma was defined as the presence of a horizontal cup-disc ratio of 0.4 or more along with an intraocular pressure of more than 22 mm Hg. Corneal diseases were defined as loss of normal corneal transparency, from any cause, involving the central cornea. The cataract surgery rate (CSR) was defined as the number of cataract surgeries per million people per year; it is a critical index for demonstrating that cataract blindness is being eliminated.

Data Analysis

The analysis was performed using the Statistical Package for the Social Sciences, version 26.0 (SPSS, Inc., Chicago, IL, USA) for Windows. The patients' characteristics were summarized for the entire sample using means and standard deviations (SDs) for continuous variables and frequencies and percentages for categorical variables.

Results

In tandem with the MOH strategy to address the issue of cataract blindness in this country, 1576 patients underwent cataract surgery in 2013, and this increased to 2255 in 2014. However, the number decreased to 1543 in 2015. A slight overall increase can be seen between 2013 and 2016, with the number of cases in 2016 amounting to 2266. The mean age of cataract patients from 2013 to 2016 was 64.57 ± 8.436 years. The majority (70%) of patients who underwent cataract surgery were 60 years old and above in each year from 2013 to 2016. Patients below 45 years old contributed cumulatively about 1% of all the cataract surgeries performed between 2013 and 2016. Overall, the mean duration of cataract surgery for each patient was 21.25 ± 11.071 min between 2013 and 2016. In terms of gender, more than 50% of the patients were female in all four years of this study. Patients were seen on a day care basis, and the majority (95%) underwent the phacoemulsification technique. From 2013 to 2016, about 42.2% of the patients who received treatment were Malay, 39.7% were Chinese, 15.4% were Indian and 2.7% were of other races.
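The rate definition used above can be made concrete with a short sketch; the function name and the catchment-population argument are ours, since the population base entering the denominator is not given explicitly in the text above.

def cataract_surgery_rate(n_surgeries, population):
    """Cataract surgery rate: operations per million people per year."""
    return n_surgeries / (population / 1_000_000)

# e.g. cataract_surgery_rate(2266, catchment_population) for the 2016 count
# reported above, with catchment_population supplied by the user.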
Almost 100% of the patient went for IOL implantation instead of another choice (Table 1). At the beginning of the partnership, the CSR for 2013 was 26.69, as was observed in 2015. However, in 2016, the CSR increased to 36.02 ( Table 2). The percentage of phacoemulsification surgeries performed in the center increased from 95% to almost 100%. From 2013 to 2016, the IOL implantation rate was greater than 98%, and more than 75% of patients had a 1st-3rd postoperative day corrected VA ≥ 0.3. Discussion The increment in cataract surgery rates from 2013 to 2016 for the MAIWP-HS indicated a progress and demonstrates the increasing availability of eye healthcare in Malaysia. CSR is a critical index to demonstrate that cataract blindness is being eliminated. The rate is higher in well-developed and high-income countries. For example, Japan recorded 10,198 cases per million people in 2013, and Australia recorded 7202 cases per million people in 2014. However, the rate is still very low in some parts of Asia. For example, China recorded 1402 cases per million people in 2015, and Indonesia recorded 1411 cases per million people in 2014 [16,17]. In Malaysia, the CSR is still low. For example, in 2014, 1397 cases per million people were recorded [17]. There are many factors associated with the CSR, such as the awareness, prevalence of cataract, accessibility to health facilities, the racial and genetic make-up and the density of ophthalmologists. However, a country with a high economic level but a low prevalence of cataract is unlikely to produce a high CSR [18][19][20]. The most common factors for a poor uptake of cataract surgery were 'awareness' (43.5%) and 'fear of surgery or poor outcome' (16.2%), respectively [17]. A study conducted by Lee MY et al., showed that the number of cataract surgeries performed across Malaysia as a day care service increased from 4887 (39.3%) in 2002 to 14,842 (52.3%) in 2011. This trend may be due to the improvement of the surgical capacity in government hospitals and the change in practice patterns among ophthalmologists [12]. Our results also showed that female patients constituted more than 50% of all cataract surgery patients each year, and the age-adjusted rate of cataract surgery (per 100,000 inhabitants) has been significantly higher for females than for males in the age group between 60 and 79 years old. The differences in gender may be due to a higher prevalence of cataract in female subjects [21]. This finding is similar to a study conducted in China, which showed that females had a significantly higher prevalence of visual impairment than men [22]. The risk of cataract increases with age [23]. Therefore, as women have a longer life expectancy than men, this may contribute to their high cataract burden [24]. According to the study by Resnikoff et al., women are more likely to have a visual impairment compared to men in every region of the world [25]. Olofsson, et al. found that female subjects who had cataract surgery also visited the healthcare provider for other reasons more frequently than the male subjects [26]. This may also contribute to the higher rate of surgery for female subjects. Additionally, a study suggested that estrogen may play a role in the incidence of cataract among women. A decrease in estrogen during menopause may cause an increased risk of cataract in women. However, it may not be due the concentration of the hormone, but to the withdrawal effect [27]. Phacoemulsification is the preferred technique for cataract surgery in Malaysia. 
It has increased significantly, from 39.7% in 2002 to 78.0% in 2011. On the other hand, extracapsular cataract extraction (ECCE) dropped from 54.0% in 2002 to 17.3% in 2011. Phacoemulsification contributed about two-thirds of the total cataract surgeries completed at the MOH hospitals in 2011. This may be due to the financial cost, as a significant number of hospitals did not have phacoemulsification machines in the early 2000s [12]. Studies by Thevi, Reddy and Shantakumar in Pahang and by Soundarajan et al. in Malaysia concluded that the visual outcome was significantly better with phacoemulsification than with the ECCE procedure (p = 0.001), and they recommended that phacoemulsification equipment should be supplied to district hospitals with adequate facilities, to assist with the performance of intraocular surgery [6,28]. According to Yorston, apart from the cost of surgery and the IOL, the lack of awareness, poor service and long distances from surgical centers were the largest hindrances to cataract surgery in developing countries [29]. A study conducted in Shanghai showed that many patients in suburban areas were not willing to have cataract surgery, even though cataract surgery services were easily available, and the authors suspected that this may be due to a lack of awareness [30]. Therefore, a program must be developed to raise awareness in the suburban population, to improve people's knowledge, understanding and willingness to accept cataract surgery as an intervention to reduce reversible blindness.

Increased rates of cataract surgery in private hospitals in recent years may also influence the CSR in government facilities. Patients are more willing to pay for cataract surgery, despite its actual cost, because of the high-quality service, accessibility and convenience [31]. The healthcare service in Malaysia is divided into two highly developed sectors. The main provider is the MOH, which is funded through general taxation of income, and the second is the private sector, which has grown substantially over the last 30 years. Public healthcare provides a service to the majority (65%) of the population but is served by just 45% of all registered doctors, and even fewer specialists. Patients therefore only need to pay minimal fees in the heavily subsidized public sector. However, the nominal fees in this public system are only applicable to Malaysian nationals. Foreigners are eligible for public healthcare, or they can choose a private healthcare facility with additional fees. Private healthcare offers a similar service to public healthcare at a higher cost, but the services are much faster and more comfortable because of the higher numbers of doctors and hospitals in this sector. However, the quality of staff and equipment in the public and private sectors is similar [32].

The cases of blindness caused by cataract are increasing globally by approximately one million per year, and the cases of reversible blindness caused by cataract with a visual acuity of less than 6/60 are increasing by 4-5 million per year. Therefore, to reduce the backlog of cataract cases, it is necessary to increase the uptake of cataract surgery. It is possible to achieve these rates if high-quality cataract surgery is performed at a reasonable cost and close to where people live, using the cataract operation center, MAIWP-HS, as a model.
This model, as an outreach center, has now been developed in several countries, most prominently in India [33,34]. The CSR can be increased by implementing a better policy for patients from poor economic backgrounds. Thus, it is important to identify the income source for patients to enable them to receive the service at a minimal cost. Other provisions include transportation to pick up patients, drive them to the health facility, perform the surgery and then send them back home afterwards. In these situations, more patients who have low incomes and difficulty commuting will accept cataract surgeries. This will ensure the delivery of high volume, good quality and low-cost cataract surgical services. Advocating with colleagues in other health sectors and with governments and private healthcare services will maximize the delivery of the essential eye care for those from marginalized groups and will achieve the goal of Vision 2020 [35]. There are a few limitations in this study. One of the limitations is that the data for the long-term follow-up of cataract surgery patients is impossible to retrieve, due to improper record keeping. Another limitation is that the outcome of cataract surgery was only measured from the 1st-3rd postoperative day on the corrected VA, without the longterm post-operative outcome of cataract surgery. The CSR data does not incorporate other significant outcomes, such as the sight restoration rate or the cataract surgical coverage among the cataract blind. Furthermore, the CSR data, prior to the setting of the center, were not available for Selangor. Conclusions The increase in the cataract surgery rate from 2013 to 2016 for the MAIWP-HS as a reference center for day care cataract surgery is a start, and it is going to help decisionmakers achieve the short-and long-term strategies for the reduction and prevention of blindness in Malaysia. More awareness programs on cataract surgery in the suburban population are needed to improve knowledge and awareness with regards to cataract and its complications. Informed Consent Statement: Informed consent was obtained from all patients involved in the study prior to start of treatment and surgery. Data Availability Statement: The data that support the findings of this study are available upon request from the corresponding author, NAM. The data are not publicly available due to information that could compromise the privacy of patients.
An epistemology for democratic citizen science

More than ever, humanity relies on robust scientific knowledge of the world and our place within it. Unfortunately, our contemporary view of science is still suffused with outdated ideas about scientific knowledge production based on a naive kind of realism. These ideas persist among members of the public and scientists alike. They contribute to an ultra-competitive system of academic research, which sacrifices long-term productivity through an excessive obsession with short-term efficiency. Efforts to diversify this system come from a movement called democratic citizen science, which can serve as a model for scientific inquiry in general. Democratic citizen science requires an alternative theory of knowledge with a focus on the role that diversity plays in the process of discovery. Here, we present such an epistemology, based on three central philosophical pillars: perspectival realism, a naturalistic process-based epistemology, and deliberative social practices. They broaden our focus from immediate research outcomes towards cognitive and social processes which facilitate sustainable long-term productivity and scientific innovation. This marks a shift from an industrial to an ecological vision of how scientific research should be done, and how it should be assessed. At the core of this vision are research communities that are diverse, representative, and democratic.

Introduction

The way we do science and the role of science in society are rapidly changing. It has been a long time since science was carried out by a small and exclusive elite of independently

In the context of our discussion here, we are most interested in a kind of citizen science that lies between these two extremes. In particular, we are interested in projects that actively involve a broad range of participants in project design, data analysis, and quality monitoring, with the triple aim of generating new scientific knowledge, of teaching participants about science, and finding solutions to a local or regional problem. We take this kind of democratic, participatory, social-movement-based or socially engaged citizen science as an ideal worth aspiring to (as do others; see [11][12][13][14], but also [4] for a more critical assessment). More generally, we believe that it serves as a good model for the kind of reforms we need for the democratization of scientific research in general, beyond the specific domain of citizen science.
Much has been written about the historical, political and sociological aspects of democratic citizen science (e.g. [12,14,15]). It differs significantly from traditional academic research in its goals, values, attitudes, practices and methodologies. It is a bottom-up approach that aims to democratize research processes through deliberative practices (see §5). Apart from its focus on the process of inquiry, democratic citizen science has a number of obvious advantages when considered from a political or ethical point of view. It not only taps into a large base of potential contributors [9,16], generally incurring a relatively low amount of costs per participant, but also attempts to foster inclusion and diversity in scientific communities (see [5] for a critical discussion), opens a channel of communication between scientists and nonscientists, and provides hands-on science education to interested citizens. Democratic citizen science can help to address the problems of undone science-important areas of inquiry which are neglected due to competing political agendas [17]-and of epistemic injustice-inequalities in the accessibility and distribution of scientific knowledge [18]. It aims to bring scientific knowledge to all those who most urgently need it, rather than only those few who provide the bulk of the funding. Its open design is intended to increase the reproducibility, adequacy, and robustness of the scientific results it generates, and to promote collaboration over competition in the process of inquiry. Last but not least, with its bottom-up approach, it challenges the hierarchical nature of scientific knowledge, which has often been taken for granted ever since discussions about science and democracy first arose (e.g. [19,20]).

All these benefits, of course, rely on the implementation and monitoring of procedures and protocols that ensure good scientific practice, management and data quality control. Other challenging aspects of democratic citizen science are its relatively low per-person productivity (compared to that of full-time professional researchers who generally require less instruction and supervision), and an increased complexity in project management-especially if citizen scientists are not merely employed for data collection, but are also involved in project design, quality monitoring as well as the analysis and interpretation of results. At the same time, this complexity and deliberative nature holds benefits such as reduction of mistakes, ability to replicate and verify information and design, and use of transdisciplinary insights that come from participants.

Beyond these practical considerations, there is a more philosophical dimension to democratic citizen science that has received surprisingly little attention so far (see [13,14,16,21-23] for a number of notable exceptions). It concerns the theory of knowledge, the kind of epistemology able to describe, analyse and support the efforts of democratic citizen science. In other words, to assess the practicality, usefulness, ethics, and overall success of democratic citizen science, we need to take seriously the kind of knowledge it produces, and the way by which it produces that knowledge. It is this largely unexamined epistemological aspect of citizen science that we want to analyse in this paper.
To precisely pinpoint and highlight the differences between knowledge production in democratic citizen science and in traditional academic research, we make use of an argumentative device: we present an epistemology ideally suited for citizen-science projects of the democratic kind by contrasting it with a very traditional view of scientific epistemology. Our intention is not to build a straw man argument, or to paint an oversimplified black-and-white picture of (citizen) science. We are very well aware that the epistemic stances of many scientists and citizens are much more sophisticated, nuanced, and diverse than those depicted here (e.g. [24]). However, even though the philosophy of science may have moved on, many practicing scientists and stakeholders of science still do retain remnants of a decidedly old-fashioned view of science, which we will call naive realism (ibid.). In most cases, this view is not explicitly formulated in the minds of those who hold it, and its assumptions and implications remain unexamined. Nor does this view amount to a consistent or systematic philosophical doctrine. Instead, naive realism consists of a set of more or less vaguely held convictions, which often clash in contradictions, and leave many problems concerning the scientific method and the knowledge it produces unresolved. Yet, somehow, these ideas tenaciously persist and hold a firm grip on what we-as communities of scientists, stakeholders and citizens-consider to be the epistemic goal and the societal role of scientific research.

It should be quite clear that the persistence of naive realism is not a purely theoretical or philosophical problem. One of its major theoretical implications is that it treats the whole world as a machine, an engineered clockwork, to be understood in purely formal and mechanistic terms. One of its major practical implications concerns the way we assess the success of research projects (see [25] for an historical overview). What we value crucially depends on how we define the epistemic (and non-epistemic) goals of science, and what we consider high-quality scientific knowledge. We will argue below that naive realism leads to a system of incentives which is excessively focused on misguided notions of accountability and short-term productivity-in particular, the efficient generation of measurable research output [26]. We could call this the industrial model of doing science, since it treats research as a system of mechanical production, which must be put under tight, top-down control.
In such an industrial system, projects of democratic citizen science are at a fundamental disadvantage. Standard assessment practices do not do justice to the diversified ways by which such projects generate knowledge and other benefits for the participants and stakeholders involved [27,28]. Even more importantly, democratic citizen science cannot compete with traditional academic science in terms of production efficiency, mainly due to its large organizational overhead, but also because the efficient production of knowledge is often not its only (or even primary) goal. All of this implies that merely encouraging (or even enforcing) inclusive and open practices, while generating technological platforms and tools to implement them, will not be sufficient to propel citizen science beyond its current status as a specialized niche product-often criticized, belittled or ignored by commentators and academic researchers for its lack of rigour and efficiency. This is a serious problem, which is philosophical down to its core, and therefore calls for a philosophical solution. In order for citizen science to succeed beyond its current limitations, we need a fundamental reexamination of the nature and purpose of scientific knowledge, and how it is produced. In particular, we need to move beyond our increasing obsession with productivity metrics in science. Simply put, we require a new model for doing research, with corresponding procedures for quality control, that is more tolerant and conducive to diversity and inclusive participation (see also [12,29]).

In what follows, we outline an epistemology of science, which is formulated explicitly with our discussion of democratic citizen science in mind. It is centred around three main philosophical pillars (figure 1). The first is perspectival realism (also called scientific perspectivism), providing an alternative to naive realism which is appropriate for the twenty-first century [30,31]. The second is process philosophy, in the form of naturalistic epistemology, which focuses our attention away from knowledge as the product, or final outcome, of scientific research, towards the cognitive processes underlying knowledge production [32][33][34]. The third and final pillar is deliberative practice, with its focus on social interactions among researchers, which yields the surprising insight that we should not always reach for consensus in science [35]. These three pillars tightly intertwine and combine into a new model, which we could call the ecological model of doing science, because-just like an ecosystem-it is centred around diversity, inclusion, interaction, self-organization and robustness, in addition to (long-term) productivity. This model is based on a completely different notion of accountability, leading to process-oriented, participatory, and integrated assessment strategies for scientific projects that go far beyond any predefined narrow set of metrics to measure research output. We highlight specific procedures that enable us to adaptively monitor how these epistemic pillars are applied, and conclude our discussion with a number of concrete suggestions on how to implement such strategies in practice.
Naive realism and the cult of measurable productivity

What we mean here by naive realism is a form of objectivist realism that consists of a loose and varied assortment of philosophical preconceptions that, although mostly outdated, continue to shape our view of science and its role in society. The central tenet of naive realism is that the main (and only) epistemic goal of science is to find objective and universal Truth. The ideas supporting this popular notion are drawn from three main historical sources: the logical positivism of the Vienna Circle, Popper's falsificationism (somewhat ironically, as we shall see), and Merton's sociology of science. Positivism in general, and empirical or logical positivism in particular, hold that information derived from sensory experience, interpreted through reason and logic, forms the source of all well-founded knowledge (e.g. [36][37][38]). The logical positivists asserted that meaningful discourse is either purely analytic (in the formal sciences, such as logic and mathematics) or empirically testable (in the natural and social sciences). Everything else is cognitively meaningless, in particular what became labelled as 'metaphysics': abstract philosophical theory that has no basis in reality. This is still reflected in the 'I have facts, and therefore do not need any philosophy' attitude of many current-day scientists.

At the heart of positivism lies the principle of verification: scientific hypotheses are positively confirmed by empirical evidence, which comes in the form of condensed summaries of direct observations, where all terms are defined ostensively, i.e. in an obvious and unambiguous manner. This firmly anchors scientific knowledge in objective reality, but it demands an exceptional degree of clarity, detachment, and objectivity on the part of the observer. The fact that human beings may not be able to achieve such detached, objective clarity was acknowledged by several logical empiricists themselves. Even our most basic observations are coloured by mood and emotions, biased assumptions, and all the things we already know.

In the meantime, Karl Popper-probably the world's best-known philosopher of science-revealed an even more serious and fundamental problem with verification: he showed that it is impossible, amounting to a logical fallacy (an affirmation of the consequent; e.g. [37,39]). By contrast, Popper argued that it is possible to falsify hypotheses by empirical evidence. Therefore, the only way to empirically test a scientific conjecture is to try to refute it. In fact, if it is not refutable, it is not scientific. This part of Popper's argument still stands strong today, and because of it (logical) positivism has become completely untenable among philosophers of science.
The doctrine of falsificationism may be the most widely held view of science among practising researchers and members of the wider public today. However, unknown to most non-philosophers, it has a number of problems and some very counterintuitive implications. First of all, falsificationism is completely incompatible with positivism, even though both views often coexist in the minds of naive realists. In Popper's view, scientific hypotheses stand as long as they have not yet been falsified, but they are never confirmed to be true in the sense of accurately reflecting some specific aspect of reality. Popper called this state of non-refutation verisimilitude, which literally translates as 'the appearance of being true'. Furthermore, falsificationism provides a rather simplistic account of how science actually works. In practice, scientific theories are rarely discarded, especially not if viable alternatives are lacking. Instead of refuting them, theories are often amended or extended to accommodate an incompatible observation. Quite often, scientists do not even bother to adjust their theories at all: paradoxical results are simply ignored and classified as outliers. 'Good enough' theories often remain in use, even if better ones are available, as is the case in space exploration, which largely relies on Newtonian rather than relativistic mechanics. Finally, falsificationism has nothing to say about how hypotheses are generated in the first place. It turns a blind eye to the sources of scientific ideas, which remain a mystery, beyond philosophical investigation. In other words, it is deliberately ignoring the cognitive and social practices that are generating and processing scientific knowledge. Seen from this reductive angle, the creative aspects of science seem rather haphazard, even irrational, and the main role of the scientific method is a negative one: to act as a selection mechanism which objectively filters out yet another wrong or idiosyncratic idea.

On top of a fluctuating mix of positivist and Popperian ideas, naive realism often incorporates a simple ethos of science that goes back to the work of sociologist Robert K. Merton [40]. This ethos is based on four basic principles: (1) universalism-criteria to evaluate scientific claims must not depend on the person making the claim; (2) communism (or communality, for our American readers)-scientific knowledge must be commonly owned once it is published; (3) disinterestedness-scientists must disengage their interests from their judgements and actions; and (4) organized scepticism-scientific communities must disbelieve, criticize, and challenge new views until they are firmly established. According to Merton, scientists who conform to his ethos should be rewarded, while those that violate it should be punished. In this way, the ethos ensures that science can fulfil its primary societal role: to provide a source of certified, trustworthy, objective knowledge.
It should be evident-even from such a brief and cursory overview-that the ideas underlying naive realism do not form a coherent doctrine. Nor do they paint a very accurate picture of actual science, performed by actual human beings. In fact, the naive realist view is highly idealized: more about what science should be like in our imagination than about what it actually is. It provides a deceptively simple epistemological framework for an ideal science whose progress is steady, predictable, under our control. This is probably why it is still so attractive and influential, even today. Everybody can understand it, and it makes a lot of intuitive sense, even though it may not hold up to closer scrutiny. Its axiomatic nature provides an enticing vision of a simpler and better world than the complicated and imperfect one we actually live in. However, because of its (somewhat ironic) detachment from reality, there will likely be unintended consequences and a lack of adaptability if we allow such an overly simplistic vision to govern our way of measuring the success of science. Let us highlight some of the specific features of naive realism that lead to unforeseen negative consequences in science today.

First of all, naive realism suggests that there is a single universal scientific method-based on logical reasoning and empirical investigation-which is shared by researchers across the natural and social sciences. This method allows us to verify, or at least falsify, scientific hypotheses in light of empirical evidence, independent of the aim or object of study. Considered this way, the application of the scientific method turns scientific inquiry itself into a mechanism, a purely formal activity. It works like an algorithm. If applied properly, scientific inquiry necessarily leads to an ever-increasing accumulation of knowledge that approximates reality asymptotically (figure 2). Because of our finite nature as human beings, we may never have definitive knowledge of reality, but we are undoubtedly getting closer and closer.

Complementary to this kind of formalization, we have a universally accepted ethos of science, which provides a set of standards and norms. When properly applied, these standards and norms guarantee the validity and objectivity of scientific knowledge. Scientific method and practice become self-correcting filters that automatically detect and weed out erroneous or irrational beliefs or biases. In that sense, scientific inquiry is seen as independent of the identity or personality of the researcher. It does not matter who applies the scientific method. The outcome will be the same as long as its standards and norms are followed correctly. All we have to do to accelerate scientific progress is to crank up the pressure and increase the number of scientists (figure 2).

Figure 2. Naive realism suggests that the universal scientific method leads to empirical knowledge that approximates a complete understanding of reality asymptotically (represented by exponential functions in this graph). Scientific progress does not depend in any way on the backgrounds, biases, or beliefs of researchers, which are filtered out by the proper application of the scientific method. According to this view, simply applying increased pressure to the research system should lead to more efficient application of the scientific method, and hence to faster convergence to the truth. See text for details.

This view has a number of profound implications:
- It sees researchers (once properly trained to adhere to the scientific method and ethos) as completely replaceable.
- It therefore fails to appreciate the diversity in researchers' experiences, motivations, interests, values, and philosophical outlooks.
- It leads to the idea that scientific inquiry can be optimized based solely on quantitative measurement of the productivity of individual researchers.

It is easy to see that all of these points are highly problematic, especially when considered in the context of democratic citizen science. Thus, a naive realist is better off without democratic citizen science, since there is no point in valuing the individual's motivations and point of view, there is no advantage in the diversity of citizen scientists, and it makes no sense to take into account a multiplicity of epistemic and non-epistemic goals beyond the efficient production of research output. All of these only lead to a loss of focus and slow traditional science down. Or do they?

In reality, the simplistic view of naive realism outlined above leads to a veritable cult of measurable productivity [26], which is steering science straight into a game-theoretical trap. The short-term thinking and opportunism that is fostered in a system like this, where rare funding opportunities confer a massive advantage and heavily depend on a steady flow of publications with high visibility, severely limits creative freedom and prevents scientists from taking on high-risk projects. Ironically, this actually diminishes the productivity of a scientific community over the long term, since the process of scientific inquiry tends to get stuck in local optima within its search space. Its narrow focus on short-term gains lacks the flexibility to escape.

What we need to prevent this dilemma is a less mechanistic approach to science, an approach that reflects the messy reality of limited human beings doing research in an astonishingly complex world [31]. It needs to acknowledge that there is no universal scientific method. Scientific research is a creative process that cannot be properly formalized. Last but not least, scientific inquiry represents an evolutionary process combining exploitation with exploration that thrives on diversity (of both researchers and their goals). Not just citizen science, but science in general, deserves an updated epistemology that reflects all of these basic facts. This epistemology needs to be taught to scientists and the public alike, if we are to move beyond naive realism and allow democratic citizen science to thrive.
Science in perspective

The first major criticism that naive realism must face is that there is no formally definable and universal scientific method. Science is quite obviously a cultural construct in the weak sense that it consists of practices which involve the finite cognitive and technological abilities of human beings, firmly embedded in a specific social and historical context. Stronger versions of social constructivism, however, go much further than that. They claim that science is nothing but social discourse (see [41] for an historical overview). This is a position of relativism: it sees scientific truths as mere social convention, and science as equivalent to any other way of knowing, like poetry or religion, which are simply considered different forms of social discourse. We find this strong constructivist position unhelpful. In fact, it is just as oversimplified as the naive realist stance. Clearly, science is neither purely objective nor purely culturally determined.

Perspectival realism [30,31,42] and, similarly, critical realism [43] provide a middle way between naive objectivist realism and strong forms of social constructivism. Both of these two flavours of non-naive realism hold that there is an accessible reality, a causal structure of the universe, whose existence is independent of the observer and their effort to understand it. Science provides a collection of methodologies and practices designed for us to gain trustworthy knowledge about the structure of reality, minimizing bias and the danger of self-deception. At the same time, perspectival realism also acknowledges that we cannot step out of our own heads: it is impossible to gain a purely objective 'view from nowhere' [44]. Our access to the world, at all levels-from the individual researcher to the scientific community to the whole of society and humanity-is fundamentally biased and constrained by our cognitive and technological abilities, which we exercise under particular social and historical circumstances.

Each individual and each society has its unique perspective on the world, and these perspectives do matter for science. To use Ludwik Fleck's original terms, every scientific community is a Denkkollektiv (thought collective) with its own Denkstil (thought style), which circumscribes the type and range of questions it can ask, the methods and approaches it can employ, and the kinds of explanations it accepts as scientific [45]. All of these aspects of inquiry have changed radically, time and again, throughout the history of philosophy and science, the most famous example being the transition from Aristotelian to Cartesian and then Newtonian styles of inquiry during the Scientific Revolution (see the open-source book [46] for an excellent overview). Our Denkstil is likely to evolve further in the future. In other words, there is no way to define science, or the scientific method, in a manner which is independent of social and historical context. Scientific inquiry is not formalizable in this way, and it never will be.
At this stage of our argument, it is important to note that by 'perspective' we do not mean just any arbitrary opinion or point of view. Perspectivism is not relativism (see also [24]). Instead, perspectives must be justified. This is the difference between what Richard Bernstein [47] has called flabby versus engaged pluralism. In the words of philosopher William Wimsatt, perspectives are 'intriguingly quasi-subjective (or at least observer, technique or technology-relative) cuts on the phenomena characteristic of a system' [31, p. 222]. They may be limited and context-dependent. But they are also grounded in reality. They are not a bug, but a central feature of the scientific approach. Our perspectives are what connects us to the world. It is only through them that we can gain any kind of access to reality at all [48]. Popper was right in saying that it is impossible to obtain absolutely certain empirical facts. Our knowledge is always fallible. But we can still gain empirical knowledge that is sound, robust, and trustworthy, at least up to a certain degree [31,49]. In fact, science gives us knowledge of the physical world that is more robust than what we get from other ways of knowing. That is precisely its purpose and societal function. Let us elaborate a bit more.

If scientific inquiry is not a purely formal activity, then scientific methods do not work like algorithms which are guaranteed to yield an ever-closer approximation to reality, no matter who is using them. Real science, performed by real scientists, does not actually aim to come up with a perfect explanation of everything, the whole of reality. Instead, researchers make use of imperfect(ible) heuristics-fallible short-cuts, improvisations that solve scientific problems (most of the time) in specific areas under specific circumstances [31,50]. Herbert Simon called this down-to-Earth approach to problem-solving satisficing, contrasting it to the largely unachievable optimal solutions sought by the naive realist [51,52]: heuristics may not be perfect, but they allow us to reach our epistemic goals within a reasonable amount of time, energy, and effort. This is an utterly pragmatic view of science.

Yet, science is not just problem-solving either. As Aristotle already recognized, the ultimate goal of inquiry is to supply us with a structured account of reality (see [32] for a contemporary discussion of this issue). This is possible, but not as easy as a naive realist might think. Being good at solving a particular problem does not automatically imply that a heuristic also teaches us something about the structure of reality. It could work for all the wrong reasons. How can we find out whether we are deceiving ourselves or not? In order to do this, we need to assess the robustness (or soundness) of the knowledge that a heuristic produces in a given context. Remember that empirical insights are never absolutely certain, but they can be robust if they are 'accessible (detectable, measurable, derivable, definable, producible, or the like) in a variety of independent ways' [31, p. 196]. It is possible to estimate the relative robustness of an insight-what we could call perspectival truth-by tracking its invariance across perspectives, while never forgetting that the conditions that make it true always depend on our own present circumstances [49]. Thus, multiple perspectives enhance robust insight, and a multiplicity of perspectives is what democratic citizen science provides. It is by comparing such perspectives that science provides trustworthy knowledge about the world-not absolutely true, but as true as it will ever get.

Having multiple perspectives becomes even more important when we are trying to tackle the astonishing complexity of the world. Science is always a compromise between our need for simple-enough explanations to support (human) understanding, and the unfathomably complex causal structure of reality, especially in areas such as the life sciences (including ecology), or social sciences such as psychology, sociology and economics (e.g. [53]). Perspectival realism frees us of the naive realist idea that science must provide a single unified account of reality-a grand unified theory of everything. As a matter of fact, unified accounts are only possible for simple systems. By contrast, complex systems (in a perspectival sense) are defined by the number of distinct valid perspectives that can be applied to them [31]. A complex system is not just a complicated mechanism, like a clockwork or a computer. It can be viewed from many different angles, and none of these views provides all there is to know about the system. Nor do these views simply combine to form a complete or certain body of knowledge covering everything the system is capable of. The more such valid perspectives there are (and the less they simply add up), the more complex the system. Climate resilience is an excellent example of a scientific problem that is incredibly complex in this way, since a full understanding of its causes and consequences requires insights from a variety of actors (researchers, farmers, policy makers, technologists, and impacted populations), and from a variety of fields, ranging from biogeochemistry, ecology, agriculture and hydrology to economics and other social sciences. Without such diverse perspectives there can be no true understanding. Democratic citizen science can be an essential tool to provide more diversity, and thus more robustness in climate research.
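As a purely illustrative aside (our own toy example, not taken from the sources cited above), the idea of 'tracking invariance across perspectives' can be caricatured in a few lines of code: several independent methods estimate the same quantity, and the less their estimates spread, the more robust the insight is treated as being. The method names and numbers below are invented placeholders.

```python
# Toy illustration (not from the paper): score how well independent
# "perspectives" (methods, instruments, teams) agree on the same quantity.
from statistics import mean, pstdev

def robustness_score(estimates):
    """Crude agreement score in (0, 1]: values near 1 mean the estimate is
    nearly invariant across perspectives; lower values mean it is
    strongly perspective-dependent."""
    m = mean(estimates)
    if m == 0:
        return 0.0                       # relative spread undefined; treat as fragile
    spread = pstdev(estimates) / abs(m)  # relative spread across perspectives
    return 1.0 / (1.0 + spread)

# Hypothetical example: the same quantity estimated via three independent routes.
perspectives = {
    "field survey":    12.1,
    "remote sensing":  11.8,
    "citizen reports": 12.6,
}
print(robustness_score(list(perspectives.values())))  # close to 1 -> robust
```

Such a score can, of course, only ever support and never replace expert judgement about whether the perspectives being compared are genuinely independent, which is the point made about robustness analysis later in this text.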
Last but not least, diversity of perspectives lies at the very heart of scientific progress itself. Such progress can occur in two qualitatively different ways: as the 'normal' gradual accumulation and revision of knowledge, or in the form of scientific revolutions [54]. In this context, it is important to notice that when a new discovery is made, the resulting insight is never robust at first [31]. Its soundness must be gradually established. This is where Merton's universal scepticism reaches its limitations: if applied too stringently to new insights, it can stifle innovation. As a new insight becomes accepted, other scientific theories may be built on top of it through a process called generative entrenchment (ibid.). The more entrenched an insight, the more difficult it becomes to revise without bringing down the growing theoretical edifice that is being built on its foundation. For this reason, entrenched insights should ideally also be robust, but this is not always the case. Scientific revolutions occur when an entrenched but fragile insight is toppled [31,54]. Classic examples are the assumptions that space and time are pre-given and fixed, or that energy levels can vary continuously. The refutation of these two entrenched yet fragile assumptions led to the twin revolutions of relativity and quantum mechanics in early twentieth-century physics (see [46] for a recent review).

As we construct and expand our scientific knowledge of the world, more and more insights become robust and/or entrenched. At the same time, however, errors, gaps and discrepancies accumulate. The detection of patterns and biases in those flaws can greatly facilitate scientific progress by guiding us towards new problems worthy of investigation. Wimsatt [31] calls this the metabolism of errors. Basically, we learn by digesting our failures. For this to work properly, however, we need to be allowed to fail in the first place (see [55]). And, yet again, we depend on a multiplicity of perspectives. To detect biases in our errors, we require a disruptive strategy that allows us to 'step out' of our own peculiar perspective, to examine it from a different point of view. This is only possible if alternative perspectives are available. Scientific progress is catalysed by diversity in ways which a naive realist cannot even begin to understand.

In summary, we have shown that a diversity of perspectives is essential for the progress of science and for the robustness of the knowledge it generates. This diversity of perspectives, in turn, depends on the diversity of individual backgrounds represented in the communities involved in designing, managing and performing research. Of particular importance in this regard are individuals with a personal stake in the aims of a scientific project. Their perspectives are privileged in the sense of having been shaped by personal experience with the problem at hand, in ways which may be inaccessible to a neutral observer. Such engaged perspectives are called standpoints [56][57][58]. Each individual standpoint can broaden the scope and power of the cognitive and technological tools being brought to bear on an issue. This is particularly important in the context of climate resilience, where local experiences and challenges must be considered as an essential part of any problem solution. Being engaged (contra Merton's principle of disinterestedness) is desirable in this context, since it makes sure that proposed problem solutions are both applicable and relevant under a given set of particular conditions. In this way, democratic citizen science can become an essential tool for the production of adequate scientific knowledge. Therefore, it is of utmost importance that the relevant stakeholders are recognized and properly represented in the research process.

Science as process

The second major criticism that naive realism must face is that it is excessively focused on research outcomes, thereby neglecting the intricacies and the importance of the process of inquiry. Basically, looking at scientific knowledge only as the product of science is like looking at art in a museum. However, the product of science is only as good as the process that generates it. Moreover, many perfectly planned and executed research projects fail to meet their targets, but that is often a good thing: scientific progress relies as much on failure as it does on success (see §3). Some of the biggest scientific breakthroughs and conceptual revolutions have come from projects that have failed in interesting ways. Think about the unsuccessful attempt to formalize mathematics, which led to Gödel's Incompleteness Theorem [59], or the scientific failures to confirm the existence of phlogiston, caloric and the luminiferous ether, which opened the way for the development of modern chemistry, thermodynamics and electromagnetism, respectively [46]. Adhering too tightly to a predetermined research plan can prevent us from following up on the kind of surprising new opportunities that are at the core of scientific innovation. Research assessment that focuses exclusively on deliverables and outcomes, and does not integrate considerations about the process of inquiry, can be detrimental to scientific progress.
Sometimes, and especially in democratic citizen science, the goal is the journey. Democratic citizen science projects put a strong emphasis on facilitating their participants' individual learning, and their inclusion in the process of inquiry at the level of the research community (e.g. [60]). Furthermore, the problems of how to manage collaborations, data sharing and quality control are no longer peripheral nuisances, but themselves become a central part of the research focus of the project. Democratic citizen science is as much an inquiry into the natural world as it is an inquiry into how to best cultivate and use humanity's collective intelligence (see [9]). The most valuable outcome of a citizen science project may very well be an improved learning and knowledge-production process. We now turn our attention to this dynamic. In this section, we look at the cognitive activities and research strategies that individual researchers use to attain their epistemic goals. The role of interactions among scientists and their communities will be the topic of §5.

The first thing we note is that scientific knowledge itself is not fixed. It is not a simple collection of immutable facts. The edifice of our scientific knowledge is constantly being extended [31]. At the same time, it is in constant need of maintenance and renovation (ibid.). This process never ends. For all practical purposes, the universe is cognitively inexhaustible (e.g. [33,61]). There is always more for us to learn. As finite beings, our knowledge of the universe will always remain incomplete. Besides, what we can know (and also what we want or need to know) changes significantly over time (e.g. [46]). Our epistemic goalposts are constantly shifting. The growth of knowledge may be unstoppable, but it is also at times erratic, improvised and messy-anything but the straight convergence path of naive realism depicted in figure 2.

Once we realize there is no universal scientific method, and once we recognize the constantly shifting nature of our epistemic goals, the process of knowledge production becomes an incredibly rich and intricate object of study in itself. The aim of our theory of knowledge must adapt accordingly. Classic epistemology, going back to Plato and his dialogue 'Theaetetus' [62], considered knowledge in an abstract manner as 'justified true belief', and tried to find universal principles which allow us to establish it beyond any reasonable doubt. This endeavour ultimately ended in failure (albeit an interesting one; e.g. [63,64]). Naturalistic epistemology, in contrast, goes for a more humble (but also much more achievable) aim: to understand the epistemic quality of actual human cognitive performance [32]. It asks which strategies we-as finite beings, in practice, given our particular circumstances-can and should use to improve our cognitive state: what are the processes that robustly yield reliable and relevant knowledge about the world? The overall goal of naturalistic epistemology is to collect a compendium of cognitively optimal processes that can be applied to the kinds of questions and problems humans are likely to encounter. This is a much more modest and realistic aim than any quixotic quest for absolute knowledge, but it is still extremely ambitious. Like the expansion of scientific knowledge, it is a never-ending process of iterative and recursive improvement-an ameliorative instead of a foundationalist project (ibid.). As limited beings, we are ultimately condemned to build on the imperfect basis of what we have already constructed.
Just like scientific perspectivism, naturalistic epistemology leads to context-specific strategies that allow us to attain a set of given epistemic goals. What is important in the context of our discussion is that different cognitive processes and research strategies will be optimal under different circumstances. There is no universally optimal search strategy for inquiry (or anything else)-there is no free lunch [65]. What approach to choose depends on the current state of knowledge and level of technological development, the available human, material and financial resources, and the epistemic goals of a project. These goals may be defined in terms of solving a particular problem, in terms of providing new insights into the structure of reality, and/or in terms of optimizing the research process itself. Choice of strategy is in itself an empirical question. Naturalistic epistemology must be based on history and empirical insights into error-prone heuristics that have worked for similar goals and under similar circumstances before [32]. We cannot justify scientific knowledge in a general way, but we can get better at appraising its epistemic value by studying the process of inquiry itself, in all its glorious complexity.

One central insight from this kind of epistemology, which is supported by empirical and theoretical evidence, is that evolutionary search processes such as scientific inquiry are subject to what Thomas Kuhn [66] has called the essential tension between a productive research tradition and risky innovation. This classical view in the philosophy of science has since been recast in computer science and popularized as the strategic balance between exploration (gathering new information) and exploitation (putting existing information to work) (for an accessible introduction, see chapter 2 of [67]). It is important to note, however, that we are not really talking about a balance in the sense of a static equilibrium here. The optimal ratio between the two strategies cannot be precisely computed for an open-ended process with uncertain returns such as scientific inquiry (ibid.). Instead, we need to switch strategy dynamically, based on local criteria and incomplete knowledge. The situation is far from hopeless though, since some of these criteria can be empirically determined. For instance, it pays for an individual researcher, or an entire research community, to explore at the onset of an inquiry. This happens at the beginning of an individual research career, or when a new research field opens up. Over time, as the field matures and information accumulates, exploration yields diminishing returns. At some point, it is time to switch over to exploitation. Imagine moving to a new city. Initially, you will explore new shops, restaurants and other venues, but eventually you will settle down and increasingly revisit your favourite places. This is an entirely rational meta-strategy, inexorably leading people (and research fields) to become more conservative over time (see [11,68-70] for evidence on this).
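As a purely illustrative aside (our own toy example, not taken from the sources cited above), the explore-early, exploit-later meta-strategy can be sketched as a tiny multi-armed bandit simulation in which the exploration rate decays as information accumulates:

```python
# Minimal illustration (ours, not the authors'): an epsilon-greedy agent whose
# exploration rate decays over time, mirroring the rational shift from
# exploration early on to exploitation once information has accumulated.
import random

def run(n_arms=5, steps=500, seed=1):
    rng = random.Random(seed)
    true_payoffs = [rng.random() for _ in range(n_arms)]  # unknown to the agent
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for t in range(1, steps + 1):
        eps = 1.0 / t ** 0.5                      # decaying exploration rate
        if rng.random() < eps:
            arm = rng.randrange(n_arms)           # explore: try something new
        else:
            arm = max(range(n_arms), key=lambda a: means[a])  # exploit best so far
        reward = true_payoffs[arm] + rng.gauss(0, 0.1)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]     # running average
        total += reward
    return total / steps, true_payoffs

avg_reward, payoffs = run()
print(f"average reward {avg_reward:.2f}, best arm pays {max(payoffs):.2f}")
```

The sketch only makes the dynamic switching between strategies concrete; real research strategies are, of course, not reducible to such a simple reward model.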
Here, we have an example where the optimal research strategy depends on the process of inquiry itself. A healthy research environment provides scientists with enough flexibility to switch strategy dynamically, depending on circumstances. Unfortunately, industrial science does not work this way. The fixation on short-term performance, measured through output-oriented metrics, has locked the process of inquiry firmly into exploitation mode (e.g. [71]). Put differently, exploration almost never pays off in such a system. It requires too much time, effort, and a willingness to fail. It may be bad for productivity in the short term, but is essential for innovation in the long run. This is the game-theoretic trap we discussed in §2. It is sustained by the narrow-minded view that the attainment of the epistemic goals of science can be accelerated simply by maximizing the rate of research output.

In this section, we have argued that naturalistic epistemology, an empirical investigation of the process of inquiry itself, could lead us out of this trap. But it is not enough. We also need a better understanding of the social dimension of doing science, which is what we will be discussing next.

Science as deliberation

The third major criticism that naive realism must face is that it is obsessed with consensus and uniformity. Many people believe that the authority of science stems from unanimity, and is undermined if scientists disagree with each other. Ongoing controversies about climate science or evolutionary biology are good examples of this sentiment (e.g. [2]). To a naive realist, the ultimate aim of science is to provide a single unified account-an elusive unified theory of everything-that most accurately represents all of reality. This kind of thinking about science thrives on competition: let the best argument (or theory) prevail. Truth is established by debate, which is won by persuading the majority of experts and stakeholders in a field that some perspective is better than all its competitors. As Robert Merton [40] put it: competing claims get settled sooner or later based on the principle of universalism. There can only be one factual explanation. Everything else is mere opinion.

However, there are good reasons to doubt this view. In fact, uniformity can be pernicious [35]. This is because all scientific theories are underdetermined by empirical evidence. In other words, there is always an indefinite number of scientific theories able to explain a given set of observed phenomena. For most scientific problems, it is impossible in practice to unambiguously settle on a single best solution based on evidence alone. Even worse: in most situations, we have no way of knowing how many possible theories there actually are. Many alternatives remain unconsidered [72]. Because of all this, the coexistence of competing theories need not be a bad thing. In fact, settling a justified scientific controversy too early may encourage agreement where there is none [35]. It certainly privileges the status quo, which is generally the majority opinion, and it suppresses (and therefore violates) the epistemic equality of those who hold a minority view that is not easy to dismiss (ibid.). In summary, too much pressure for unanimity leads to a dictatorship of the majority, and undermines the collective process of discovery within a scientific community.

Let us take a closer look at what this process is. Specifically, let us ask which form of information exchange between scientists is most conducive to cultivating and utilizing the collective intelligence of the community. In the face of uncertainty and underdetermination, it is deliberation, not debate, which achieves this goal [35]. Deliberation is a form of discussion that is based on dialogue, rather than debate. The main aim of a deliberator is not to win an argument by persuasion, but to gain a comprehensive understanding of all valid perspectives present in the room, and to make the most informed choice possible based on the understanding of those perspectives (e.g. [73]). What matters most is not an optimal, unanimous outcome of the process, but the quality of the process of deliberation itself, which is greatly enhanced by the presence of non-dismissible minorities. As Popper already pointed out, the quality of a scientific theory increases with every challenge it receives. Such challenges can come in the form of empirical tests, or thoughtful and constructive criticism of a theory's contents. The deliberative process, with its minority positions that provide these challenges, is stifled by too much pressure for a uniform outcome. As long as matters are not settled by evidence and reason, it is better-as a community-to suspend judgement and to let alternative explanations coexist.

It is not difficult to see how deliberation-with its choice-making based on the understanding of multiple perspectives-is particularly important for interdisciplinary and transdisciplinary projects. Interdisciplinary projects are those in which scientists from different disciplines work together, while transdisciplinarity represents the most complex degree of cross-disciplinary collaboration, which aims to transcend the disciplinary boundaries within its domain altogether. Such projects boost scientific innovation when they manage to integrate different perspectives into a cohesive solution (reviewed in [68]). They help science break out of the inexorable tendency of research fields to become more conservative over time (see §4). They are key to generating and enhancing epistemic exploration. But, like other exploratory processes, they need time and effort to establish. Deliberative processes cannot be rushed. To integrate them into our research environment, we need to assess their quality directly.
Deliberative processes that facilitate collective intelligence work best with relatively small groups of deliberators, each with an engaged and non-dismissible standpoint on the matter at hand. However, many scientific projects-especially those of democratic citizen science-require human and material resources that go beyond the capabilities of small groups. This is particularly relevant in the field of climate resilience, where the number of impacted citizens reaches the planetary scale. In such cases, the deliberation process needs to be based on a suitable community structure in order to scale. This is why an increasing amount of science is done by teams [68]. There is empirical evidence that small teams of investigators are more innovative than isolated individuals or large-scale consortia [69]. This is because they strike a delicate balance between a diversity of standpoints and the ability of their members to engage productively in deliberation. The deliberative process can then be rescaled as an interaction between teams, resulting in a hierarchy of interactions that enable collective intelligence at multiple levels. This is an area of investigation that needs much more attention than it currently receives.

An ecological vision for citizen science

In §§3-5, we have outlined the three main pillars of an emerging epistemology that is tailored to the needs of democratic citizen science, but is equally applicable to academic research in general. We see the kind of citizen science it envisions as paradigmatic for a more participatory research environment, adequate for the complex planetary-scale problems humanity is facing today. Its highest aim is to foster and put to good use the collective intelligence of humanity. In order to achieve this, we need research communities that are diverse, engaged, representative, and democratic. What we propose here is an 'ecological' vision for a science which supports diversity, inclusion, and deliberation. This vision stands in stark contrast to our current industrial model of doing science (see §1). The two approaches are compared in table 1. Note that both models are highly idealized. They represent different ideals of how research ought to be done-two alternative ethos for science.
We have argued that the naive realist view of science is not, in fact, realistic at all. In its stead, we have presented an epistemology that adequately takes into account the needs and capabilities of limited human beings, solving problems in a world of planetary-scale interconnected complexity. The ecological research model proposed here is less focused on direct exploitation, and yet it has the potential to be more productive in the long term than the current industrial system. However, its practical implementation will not be easy, due to the game-theoretic trap we have manoeuvred ourselves into (see §2). Escaping this trap requires a deep understanding of the social and cognitive processes that enable and facilitate scientific progress for all. Finding such processes is an empirical problem, which is only beginning to be tackled and understood today. The argument we are making here is that such empirical investigations must be grounded in a suitable epistemological framework, and a correspondingly revised ethos of science, able to provide philosophical and ethical guidance for our attempts to improve our methods for scientific project management, monitoring, and evaluation through experience and experimentation. These methods must acknowledge the contextual and processual nature of knowledge production. They need to focus directly on the quality of this process, rather than being fixated exclusively on the outcome of scientific projects. They need to encompass multiple levels-from the individual investigator to their research community to the context of society in general. And they need to account for a diversity of epistemic goals.

Unfortunately, such explorative efforts are likely to fail unless we break out of the restrictive framework we have built around ourselves through an ever stronger focus on measuring research output, detached from any consideration of the cognitive and deliberative processes that generate it. Before we can achieve anything else, we must use our new appreciation of the process of inquiry to move beyond metric fixation, beyond the cult of productivity [26]. As a first step, this requires a broader awareness of the underlying philosophical issues. While the epistemological arguments we have presented here are well known among philosophers of science, they are virtually unheard of among practising scientists, science stakeholders and the general public. This urgently needs to change before we can have the kind of conversations that lead to sustainable changes in mindset and policy. Democratic citizen science is one of the most important initiatives towards increasing diversity, representation, and participation in science today. In addition, it is one of the main sources for new insights into the process of inquiry, and its process-oriented assessment. For these reasons, citizen science must play a key role in the upcoming transition from an industrial to an ecological model of doing research. In the final section of our paper, we will discuss the kind of measures we could experiment with to improve the assessment of citizen science projects along the lines of the philosophical argument we have presented above.
Beyond metric fixation: implications for project evaluation

Our philosophical analysis points to a central conclusion: any proper evaluation of a scientific project must include an epistemic appraisal of its process of inquiry, including an assessment of the material, cognitive, deliberative, and organizational practices involved in knowledge production. It is not enough to judge a project by its outcome alone-the number of scientific publications it has produced, let us say, or the amount of factual knowledge its participants are able to regurgitate at a final debrief or exam. This central insight also underlies a recently proposed multidimensional evaluation framework for citizen science projects, which makes a fundamental distinction between process-based and outcome-based aspects of assessment [27,28]. It identifies three core dimensions to citizen science: scientific, participant, and socio-ecological/economic. For each of these, it defines criteria of evaluation concerning both aspects of 'process and feasibility' as well as 'outcome and impact' (figure 3). Such a framework can be applied not only to strategic planning, the selection of specific projects to be funded, and impact assessment after a project is finished, but also to monitor and, at the same time, to mentor participants and facilitate the progress of a project while it is running. Evaluation itself becomes a learning process-learning about learning-that supports participatory self-reflection and adaptive management practices [28].

Due to the epistemological nature of our argument, we focus mainly on the scientific knowledge dimension of this evaluation framework here, although epistemic processes underlying individual and collective learning and their wider societal and ecological impact are also subjects highly deserving of closer philosophical attention. In this context, it is worth repeating that not all citizen science projects have their main focus on the production of novel scientific theories, or the fundamental revision of existing scientific frameworks. Some are geared toward applied knowledge, or efforts in community-level data collection. Moreover, non-epistemic goals-changes in individual attitudes and behaviour, cultural practices or policies, for example-can be equally or even more important in some cases. For this reason, the evaluation framework in figure 3 is designed to be flexible and adaptive in terms of weighting different criteria. Moreover, while we limit our discussion to process-based aspects of scientific knowledge production, we do not want to leave the impression that evaluation of outcome is unimportant. Both aspects need to be considered together. What we do want to do here is to highlight the fact that process-based assessment remains undervalued, underdeveloped, and underused in the current system of academic research. Our analysis provides epistemological reasons for addressing this problem. Developing adequate approaches to process-based assessment requires an improved understanding of suitable practices of individual and community-level knowledge production that can actually be carried out in today's research environment.
Beyond emphasizing processual and participatory methods of evaluation, there is another fundamental point that arises from our analysis: many of the features that make democratic citizen science (and science in general) worthwhile and productive are impossible to capture by performative metrics. For example, the originality, relevance, and value of a scientific insight cannot be quantified objectively, because notions of 'originality', 'relevance' and 'value' contain fundamentally subjective and radically context-sensitive facets that are crucial to their meaning. Similarly, there is no standardized algorithm to assess the robustness or soundness of a piece of scientific knowledge. Instead, proper robustness analysis requires a careful comparison of scientific perspectives and an assessment of their independence from each other, which cannot be done without deep insight into the research topic and all the approaches that are being compared [31]. Standardized measures can support, but never fully replace judgement based on experience. Similarly, there is no metric for the generalizability or the adaptiveness of a scientific result. The range of circumstances under which some theory or insight may be usefully applied is impossible to predict, or even prestate [61,74]. Discovery cannot be planned in this sense. Much of scientific inquiry is driven by serendipitous coincidences, historical accidents, which cannot be captured by any predictive measure based on past evidence alone.

Thus, discovery cannot be forced, but it can be facilitated by providing an environment that is conducive to it. Our epistemological framework implies that this can be achieved by incentivizing collaborative processes and deliberation based on a diversity of standpoints. Obviously, this same argument also applies to the assessment of the wider socio-ecological implications of a project, its stakeholder engagement, its social embeddedness, and so on [28]. Each research project should be assessed under consideration of its particular scientific and societal circumstances, as well as its particular epistemic and non-epistemic goals. Even so, much of its value will only become evident in hindsight. Trying to define one-size-fits-all metrics or numerical indicators for qualities such as originality, relevance, robustness, adaptedness, or generalizability is bound to be counterproductive, because each and every scientific project, and the knowledge it generates, is different. Generalized abstraction ignores situation-dependent nuances, which may be essential for the success of a project and can only be assessed qualitatively and in retrospect.

Finally, there is another problem that arises in systems where rewards and punishments no longer depend on professional judgement-based on personal experience, honesty, dedication, and talent-but on quantitative indicators implemented as standard metrics of comparative performance. Such systems become vulnerable to metric gaming [26]. When a metric becomes the target of the measured
system, Goodhart's Law applies, which states that such metrics are no longer good indicators for the system's original purpose. Efforts become channelled into optimizing performance as measured by the metric, often in ways that are not conducive to the system's original goals. This happened, for example, to the US school system after the introduction of standardized testing, which led to widespread teaching to the test (ibid.). Similarly, surgeons who are rated on the number of their successful operations often refuse to take on difficult cases (ibid.). Metric gaming is also taking over the academic research system, where an unhealthy fixation on publication metrics leads to risk avoidance and the short-term optimization of personal research output to the detriment of community-level, long-term progress. Somewhat ironically, this trend is measurable: while the content of individual scientific publications is progressively diminishing, approaching what has been called the minimal publishable unit of information, the number of authors per paper is rapidly increasing (e.g. [75]). These trends are empirical signs of an academic system that is being systematically manipulated. Such a system no longer rewards those who do the best work, but those who are most efficiently gaming the metric, and hence the reward system in general.

All of this poses a formidable challenge for scientific project evaluation. On the one hand, we really do need methods to compare the quality of scientific projects: how else are we going to implement a fair and rigorous system for strategic planning, funding, monitoring, and assessment in research? On the other hand, we know that the value of a scientific project is radically context-dependent, and that standardized metrics make a system vulnerable to being gamed. As we have seen in §3, this does not necessarily have to lead us into relativism, considering any project as good as any other. There are criteria by which we can assess the promise and importance of a project, or the robustness of the knowledge it produces. What we need then, if we want to adopt an ecological model of citizen science, is an approach to evaluation, grounded in a perspectival, naturalistic, deliberative epistemology that is flexible and adaptable to the specific needs and circumstances at hand, and yet rigorous in its approach to epistemic appraisal. An example of such an evaluation model was recently used to establish a community-driven review system for rapidly adaptive micro-grants [76].

We have said this before, but it bears repeating: the essential first step towards such an approach is to overcome our current metric fixation [26]. Instead of being based on a set of fixed standards and metrics, project assessment ought to be grounded on shared values and procedures, themselves constantly subject to evaluation. To attain that goal, scientific assessment should not only evaluate the quality of the deliberative process of inquiry, but must itself become a deliberative, participatory, and democratic process.
Second, we need to carefully choose appropriate procedures to evaluate both quantifiable and non-quantifiable aspects of a project, and how they compare with alternative approaches in terms of achieving its specific goals. These procedures should be adapted to context, transparent, flexible, and they should include an element of self-evaluation. One suitable model is co-evaluation [77], an approach to assessment that includes all actors involved in or affected by a project in an iterative process and is based on methods from participatory action research (e.g. [78][79][80]). On top of this, there must be a meta-level process that evaluates the evaluators as they assess a project, guided by deliberative procedures. Finally, assessment must include an evaluation of the quality of this deliberative process itself. More than resembling the hierarchical mechanism of a clockwork, this method of project assessment imitates the self-regulatory and homeostatic dynamics of a living organism.

Conclusion

In this paper, we have introduced an epistemological framework that can serve as the foundation for the development and adaptation of new descriptors and procedures for project evaluation in democratic citizen science and academic research in general. This framework is based on the three pillars of perspectival realism (§3), process thinking in the form of naturalistic epistemology (§4), and deliberative practice (§5), leading to what we have called an 'ecological model' of doing research (§6). Perspectivism implies that the range of backgrounds and motivations of individual researchers in a community greatly influences the kind of questions that can be asked, the kind of approaches that can be used, and the kind of explanations that are accepted in a given research and innovation field. Naturalistic epistemology focuses our attention on the quality of the cognitive processes leading to a given research output, while deliberative practice emphasizes the community-level social dynamics that are required to enable collective intelligence. Together, these pillars lead to a new research ethos that values diversity, inclusion, and good communication much more than the traditional Mertonian approach to science (see §§2 and 6).
We have described the implicit amalgamation of positivist, Popperian and Mertonian ideas in the minds of scientists and stakeholders as 'naive realism' (§2). It could be argued, though, that our own vision of democratic citizen science is itself naive. In fact, Mirowski [4] has characterized open science (and citizen science with it) as something even worse: a pretext to extend neoliberal free-market thinking, with the aim of enabling platform capitalism (as exemplified by online giants such as Google and Facebook, or publishing corporations such as Elsevier) to build commercial monopolies from the systems of knowledge production. We are sympathetic to Mirowski's criticism, but emphasize that what he describes is a citizen science as it exists (and struggles) in the current status quo of the industrial system. Our attempt to sketch a more 'ecological' epistemological framework for academic research could be seen as an attempt to provide the philosophical foundations for the new 'political ontology' and the 'economic structure' Mirowski is calling for (ibid.). We are in no way naive enough to think this will be easy to implement under the current socio-political circumstances, or that it will be achieved in some sort of utopian way. Instead, we see the new ethos of science we are outlining here as something that can guide and inspire us while working pragmatically towards a more humane and sustainable research system based on more democratic values and procedures.

The main feature of our ecological model of research-what makes it resilient towards attempts at gaming the rules-is its adaptive flexibility: it adjusts itself to the circumstances of each project to be evaluated-its epistemic and non-epistemic aims, the backgrounds and motivations of its participants, and the nature of its particular research question and methodology. It employs a situated process-based quality assessment that relies on shared values and procedures, rather than standard metrics (which may still be used to support it, of course, but are no longer the only evaluative tool). Its adaptive nature renders it more resilient against attempts at gaming the system. The assessment process becomes a learning process itself, which can dynamically react to novel circumstances and challenges (see §7).

Our framework requires that we pay much more attention to the process of inquiry than in a traditional system, where evaluation is largely based on immediate and measurable research outcome. In particular, we recommend quality assessment to focus on the aspects of diversity, inclusion, and deliberation. The evaluation of the potential of a project should be combined with constant monitoring and facilitation of the research process. Are all relevant standpoints of impacted stakeholders represented in the community? Do project participants feel they are heard and can make a relevant contribution to the project? Is the deliberative format properly facilitated? Does it enable high-quality cognitive engagement of participants with the research problem at hand?
Do participants understand the ethos of doing scientific research and innovation? Do they understand the criteria by which they will be evaluated? Are they given enough autonomy? Are they allowed to fail, while still having their efforts appreciated? Can they disagree with the majority view during deliberation? Can they comment on and contribute to the evaluation of their efforts themselves? This kind of process-focused assessment and facilitation allows a project to be deemed a success, if its process was properly implemented, even if the desired output may not have materialized at the end of the project. It allows participants and evaluators to jointly learn from their successes and (often more importantly) from their failures. And it generates a more collaborative and positive atmosphere in which to undertake creative work. Such a system cannot compete with industrial science on short-term efficiency. It takes time and effort to implement, and the deliberative process is optimized for participation and learning, rather than production. In the long run, however, this system has the potential to be more productive and innovative than the present one. It provides a way for exploration to re-enter the world of academic research, allowing us to escape the local search maxima that the game-theoretic trap of the cult of productivity has gotten us stuck on.

Figure 1. The three pillars of our ecological model for scientific research. See text for details.

Figure 2. Naive realism suggests that the universal scientific method leads to empirical knowledge that approximates a complete understanding of reality asymptotically (represented by exponential functions in this graph). Scientific progress does not depend in any way on the backgrounds, biases, or beliefs of researchers, which are filtered out by the proper application of the scientific method. According to this view, simply applying increased pressure to the research system should lead to more efficient application of the scientific method, and hence to faster convergence to the truth. See text for details.

Figure 3. An assessment framework for democratic citizen science. Reprinted with permission from [28].

Table 1. Two idealized models for scientific research. This table compares different emphases exhibited by 'industrial science' versus 'ecological science'. Note that both visions represent ideals, which are rarely attainable in practice. Most scientific projects will come to lie somewhere along the spectrum between these two extremes. See text for details.
v3-fos-license
2018-04-03T05:48:59.092Z
2016-07-14T00:00:00.000
14775113
{ "extfieldsofstudy": [ "Medicine", "Biology" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.nature.com/articles/srep29553.pdf", "pdf_hash": "4456bc97abddd88c31c2a529819b11914af61e9d", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:90", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Biology" ], "sha1": "4456bc97abddd88c31c2a529819b11914af61e9d", "year": 2016 }
pes2o/s2orc
Breeding signature of combining ability improvement revealed by a genomic variation map from recurrent selection population in Brassica napus Combining ability is crucial for parent selection in crop hybrid breeding. The present investigation and results had revealed the underlying genetic factors which might contribute in adequate combining ability, further assisting in enhancing heterosis and stability. Here, we conducted a large-scale analysis of genomic variation in order to define genomic regions affecting the combining ability in recurrent selection population of rapeseed. A population of 175 individuals was genotyped with the Brassica60K SNP chip. 525 hybrids were assembled with three different testers and used to evaluate the general combining ability (GCA) in three environments. By detecting the changes of the genomic variation, we identified 376 potential genome regions, spanning 3.03% of rapeseed genome which provided QTL-level resolution on potentially selected variants. More than 96% of these regions were located in the C subgenome, indicating that C subgenome had sustained stronger selection pressure in the breeding program than the A subgenome. In addition, a high level of linkage disequilibrium in rapeseed genome was detected, suggesting that marker-assisted selection for the population improvement might be easily implemented. This study outlines the evidence for high GCA on a genomic level and provided underlying molecular mechanism for recurrent selection improvement in B. napus. Crop breeding programs have generated excellent resources that can be used to improve agronomic traits and identify favorable loci affected by artificial selection. Analysis of genetic diversity, allele frequency, and heterozygosity are used to find genomic alterations and genetic effects on the traits in different generations or sub populations 11 . Additionally, this has been found to be a good approach for scanning genome regions, even candidate genes that underline selection 7 . In chicken, 82 putatively selected regions with reduced levels of heterozygosity are identified 12 . In a cattle population, genetic changes are detected, and 13 genomic regions were found to affect milk production 13 . Moreover, several functional genes were verified in some selected regions in cattle 14 . Similar studies have been carried out in other animals 15,16 . In miaze, a set of genes (2~4% of 774 genes) are found to have undergone artificial selection during domestication 3 . Scanning of few known functional genes involved in maize domestication has indicated selection signatures on the genomic level 4,17 . Furthermore, several chromosome segments and genes were revealed by comparing genetic variation between wild and cultivated populations in soybean 5 . As for rice, a genealogical history analysis of overlapping low diversity regions can distinguish genomic backgrounds between indica and japonica rice populations, and 13 additional candidate genes were identified 18 . Another study found 200 genomic regions, spanning 7.8% of the rice genome that had been differentially selected between two putative heterotic groups 19 . These studies have successfully investigated genome-wide genetic changes during domestication and modern breeding. The results can provide useful information to reveal the agronomic potential of a breeding line and genomic loci. Rapeseed (Brassica napus; AACC, 2n = 38) is one of the most important oil crops worldwide. 
Rapeseed originated from a doubling event between Brassica rapa (AA, 2n = 20) and Brassica oleracea (CC, 2n = 18) along the Mediterranean coastline 10,000 years ago 20,21 . It is considered as a young species because of a short domestication history spanning only 400-500 years 22 . In addition to several other factors, modern breeding has substantially increased production, especially through heterosis. In a hybrid breeding program, combining ability is a crucial factor for parental line selection and for the development of superior hybrids. Evaluation of the combining ability using traditional methods is labor intensive and time-consuming, and may create a bottleneck in hybrid breeding 23 . Therefore, dissection and comparison of the genetic basis of combining ability can be crucial for breeding. Combining ability was defined as a complex trait in plants, and was evaluated by several techniques, including molecular markers, QTL mapping, and genome scan approaches [24][25][26] . There have been limited investigations carried out to evaluate the genetic basis of combining ability in rapeseed. During rapeseed breeding history, heterosis and double-low varieties (low erucic acid and low glucosinolate) were mainly used to produce higher yield and better quality at the cost of genetic diversity 27,28 . Recently, new genetic resources are used to increase the genetic basis of rapeseed, including the artificially synthesized B. napus generated from B. oleracea and B. rapa 29 , the subgenome materials 30,31 . Multigenerational improvement and a recurrent selection program are required before utilizing these new materials. In our work, genomic SNP markers were used to analyze the breeding signatures of GCA as revealed by the genetic variation in a recurrent selection population. The objectives of our study were (1) to estimate genetic diversity of genome-wide SNPs in different groups of the rapeseed restorer population, (2) to detect the putatively selected regions and SNPs associated with breeding efforts on the genomic level, and (3) to identify known important QTLs associated with rapeseed agronomic traits in selected regions. These findings might be of potential use in improving the rapeseed breeding. Results Phenotype variations in yield and yield-related GCA. Plant yield from the population of 175 families and 525 hybrids, were analyzed with two replicates in three different environments. GCA of each parental line was estimated statistically using the phenotype data sets. Extensive phenotype variations were observed ( Table 1). The mean yield of three environments were 13.93 g, 7.41 g, 8.08 g per plant, respectively, and varied from 4.36~36.87 g in Wuhan, from 1.62~15.92 g in Xiangyang and from 2.24~37.82 g in Yichang. The plant yield had high coefficients of variation in the three environments, suggesting that the yield of the rapeseed was a typical quantitative trait and was substantially affected by the environment. The mean value of GCA (Table 1) Genetic variation detecting across the regions of the specific loci. SNPs were used to detect the genetic variation of three specific loci in rapeseed genome: erucic acid related genes at the BnFAE1.1, BnFAE1.2 loci on A8 and C3 chromosomes 32 , and the Ogrua CMS restorer gene Rf o loci on C9 chromosome 33 . It consistently showed that closer the target loci, lower the genetic variation (Fig. 1). These findings indicated that our selection program have been carried out efficiently. 
Moreover, the evaluation method via genetic diversity could be of potential use in breeding improvement. Linkage disequilibrium (LD) in the R population. R 2 was used to calculate the LD level. For r 2 = 0.2, LD level occurred at approximately 0.8 Mb, 4.8 Mb, and 2.4 Mb for A and C subgenomes, and AC genome, respectively (Fig. 2). When r 2 decayed to 0.1, LD values increased to 3 Mb, 8 Mb and 6.5 Mb for A and C subgenomes, and AC genome, respectively. The C subgenome had a larger LD value than A subgenome. As for chromosomes of the two subgenomes, LD of chromosomes in A subgenome was highly consistent, while variation was detected in C subgenome. The LD of the C4 chromosome was higher than 6 Mb when r 2 = 0.1 (Fig. S2). C1 and C2 chromosomes showed almost no LD decay. Genetic variation of the two subgenomes was also evaluated. The A subgenome had a little higher genetic diversity than the C subgenome (Table 3). By detecting the changes in genetic diversity between the selected and basic populations, we found a greater decrease in the C subgenome (2.7%) as compared to the A subgenome (1.55%). Selected regions and candidate QTLs analysis. Scanning of genomic regions indicated a reduction in genetic diversity. In total, we identified 376 selected regions, covering 3.03% (21.26 Mb) of the assembled genome (Table 4; Fig. S3). More than 96% of these regions were distributed on the C subgenome ( Fig. 3; Table 4). C6 chromosome had the largest size of selected regions (4.56 Mb), while A3 had the smallest (0.02 Mb). Furthermore, A1, A5, A7, A8, A9, A10 and C5 chromosomes had no distribution of selected regions. The mean size of selected regions for each chromosome on A and C subgenomes and AC genome were 0.14 Mb, 2.55 Mb, and 1.52 Mb, respectively. The C subgenome had a larger distribution of selected regions than the A subgenome. Many QTLs related to yield and yield-related traits were located in these selected regions (Supplementary Table S1) which likely contributed to the increase in rapeseed yield and GCA. Among the 19 chromosomes of the rapeseed genome, we found differences in the distribution of genetic diversity in the selected regions. In particular, 35 . QTL hot spots contained important QTLs for rapeseed yield and Table 3. Genetic diversity of the genome in R population and the selected population. Chr represent the chromosome; A, C and AC represent the A and C subgenome, and the whole genome of rapeseed, respectively. a The average value of genetic diversity (π ) for the R population. b The average value of genetic diversity (π ) for the the selected population. c The decrease ratio of genetic diversity. yield-related traits were also detected in the region on chromosome C3 (Supplementary Table S1). All these QTLs in the selected regions provided a potential resource for rapeseed breeding, and selection for these QTLs for rapeseed genetic improvement might lead to low genetic diversity in these regions, but increase in rapeseed yield. Pedigree breeding history reproduction. The genomic changes that occurred between the genealogy lines were detected. We reconstructed the recombination events that gave rise to specific inbred lines zhongsh-uang5 and zhongshuang4, which were both produced from zhongyou821. We traced the chromosome segments through pedigree breeding of the two lines. In total, zhongshuang5 inherited 15.41% of its genome from the ancestral line zhongyou821 while zhongshuang4 inherited 34.06% (Table 4; Fig. S4). Zhongshuang5 inherited 24.17% Table 4. 
Summary of size and distribution of selected regions and IBD regions between the genealogy lines. Chr represents the chromosome; A, C and AC represent the A and C subgenomes, and the whole genome of rapeseed, respectively. Zy821 stands for zhongyou821; zs5 stands for zhongshuang5; zs4 stands for zhongshuang4. a Genome size covered by all SNPs on each chromosomes. b Summary size of selected regions on each chromosomes. SR stands for selected region. c Summary size of IBD regions on each chromosomes between zy821 and zs5. IBD is an abbreviation of identity by descent. d Summary size of IBD regions on each chromosomes between zy821 and zs4. e The percentage of the IBD regions shared the chromosome between zy821 and zs5. f The percentage of the IBD regions shared the chromosome between zy821 and zs4. of the A subgenome and 10.26% of the C subgenome from zhongyou821, while zhongshuang4 inherited only 14.37% of the A subgenome but 45.63% of the C subgenome from zhongyou821. Out of the 19 chromosomes, six chromosomes (A1, A5, C1, C6, C7 and C8), showed that more than half of their chromosome fragments were inherited from zhongyou821 into zhongshuang4, particularly in C6 and C7, where almost the whole chromosomes were found to be inherited. However, 8 chromosomes (A2, A3, A4, A6, A7, A8, A9, and C2) were not inherited into zhongshuang4. In 45.63% inherited component, we observed that 84.39% was from the C subgenome. These findings were consistent with the analysis of the selected regions. Fixed SNP provided a reference index for population improvement. By detecting the allele frequencies of genome-wide SNPs, we identified a total of 403 Fixed SNPs from the genotype data sets. There were 214 of these Fixed SNPs from the A subgenome and 189 from the C subgenome (Fig. 5). The allele frequencies of these SNPs were fixed to 0 or 1 in the selected group and subsequent generations. These loci have lost other alleles and showed monomorphism in the subsequent population. Discussion A yield-improving plateau occurs for a limited genetic diversity 2 . Demonstration and breeding program reduce the crop genetic diversity significantly 1,2 . To enhance the diversity, we used the contents of the subgenomes from the relation species in Brassica. By the breeding method of recurrent selection for GCA improvement, some desirable loci were maintained in the population and others undesirable loci were deleted from the population. Our analysis provided useful information to exhibit genetic base of GCA on rapeseed. The LD value of this population was larger than the natural population reported previously. Breeding selection for the favorable alleles would increase the LD level between loci in genome 36 . In this study, we observed strong LD between SNPs separated up to 2.5 Mb (r 2 = 0.2). This value was higher than the LD value obtained in previous studies [37][38][39] , which have indicated LD levels at about 500 Kb, 700 Kb, and 2 cM, respectively. In these studies, researchers have used the resource populations collected from all over the world which contained higher rapeseed genetic variation. Contrastingly, in our study, the population was derived from several artificially synthesized B. napus and the subgenome materials. Afterwards, it was improved for subsequent generations which might have contributed to the higher LD. These findings might be useful for marker-assisted recurrent selection. 
Our results also demonstrated higher LD in the C subgenome, especially for C1, C2 and C4 chromosomes, which was consistent with the previous results 40,41 . Possibly, this could be explained by several reasons: the C subgenome had a lower level of genetic variability than the A subgenome, and the C subgenome might be under a more intense selection pressure in our breeding program. Polygenetic analysis also showed a decreasing trend in the diversity of the C subgenome than that of the A subgenome (Table 3). It had also provided a favorable evidence for the higher LD of the C subgenome. The C subgenome is a repository for a wider range of selected regions with favorable loci contributing to rapeseed agronomic traits. By detecting the changes in genomic diversity, we identified 376 genomic regions and covered 3.03% (21.26 Mb) of the rapeseed genome. Many important QTLs related to yield and yield-related traits were located in some of these selected regions (Supplementary Table S1). In particular, some of these genomic regions harboured QTL hotspots (for one trait or multiple traits) or significant QTL reported in other studies (Fig. 4). We noticed that more than 96.05% of these selected regions distributed on the C subgenome ( Fig. 3; Table 4) and only about 3.95% was distributed on the A subgenome. This indicated that the C subgenome had sustained more pressure in the selection program or the C subgenome contributed more to the yield-related GCA than the A subgenome. The differences in the genome background between the genealogical lines further support this conclusion (Table 4; Fig. S4). In China, for improvement of the adaptive traits of the European and Japanese varieties, breeders have lead to the introgression of the A genome components of B. rapa into the B. napus genome 31,42 . This process has enhanced the genetic diversity of A subgenome in Chinese rapeseed. However, the breeding potential of C subgenome has not been developed and utilized much. The genetic background of our population contained European winter-type rapeseed, which has higher genetic variation of C subgenome than the A subgenome 37 . Furthermore, the subgenome materials (A r A r C c C c ) have been introgressed with the C c genome from B. carinata, and artificial synthetic materials have been introgressed with the C o genome from B. oleracea, which might also contributed to the increased genetic variation of C subgenome in B. napus. These new genetic components of the C subgenome might potentially improve the rapeseed yield. Results of the present investigation, along with a deeper understanding of heterosis and changes in breeding programs have indicated that the C subgenome needs to be fully developed in rapeseed hybrid breeding. Recurrent selection has been established as a very useful method for plant breeding 43,44 . The process can break the linkage of disadvantageous alleles and pyramid favorable alleles through sustaining recombination and selection. In this study, we used the recurrent selection method to improve the GCA level of the R population. The top 20% individuals with high GCA were selected for the next generation. Genetic analyses showed that there were many genomic regions under selection. These regions might play an important role in rapeseed breeding. We suggest that most favorable alleles might be accumulated through MAS and standing selection. This might assist in the development and improvement of potential rapeseed. 
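The r² statistic used above to characterize LD decay can be illustrated with a short sketch. The snippet below is an illustrative assumption rather than the TASSEL procedure actually used in the study: it approximates r² as the squared Pearson correlation of genotype dosages (a composite-LD shortcut that avoids phasing) and bins pairwise values by physical distance to locate where the mean r² first falls below a chosen threshold (e.g., 0.2 or 0.1). The function names, the dosage coding, and the 100 kb binning are all hypothetical choices introduced here for illustration.

```python
import numpy as np

def ld_r2(genotypes):
    """Approximate pairwise LD (r^2) as the squared Pearson correlation of
    genotype dosages coded 0/1/2. This composite-LD shortcut avoids phasing
    and is not identical to haplotype-based r^2 from dedicated software."""
    g = np.asarray(genotypes, dtype=float)      # shape: (individuals, SNPs)
    g = g - g.mean(axis=0)                      # centre each SNP
    cov = g.T @ g / (g.shape[0] - 1)
    sd = np.sqrt(np.diag(cov))
    sd[sd == 0] = np.nan                        # mask monomorphic SNPs
    r = cov / np.outer(sd, sd)
    return r ** 2

def ld_decay_distance(r2, positions_bp, threshold=0.2, bin_kb=100):
    """Bin pairwise r^2 by physical distance and return the midpoint (kb)
    of the first distance bin whose mean r^2 drops below the threshold."""
    pos = np.asarray(positions_bp, dtype=float)
    i, j = np.triu_indices(len(pos), k=1)
    dist = np.abs(pos[i] - pos[j])
    vals = r2[i, j]
    keep = np.isfinite(vals)
    bins = (dist[keep] // (bin_kb * 1000)).astype(int)
    vals = vals[keep]
    for b in range(bins.max() + 1):
        in_bin = bins == b
        if in_bin.any() and vals[in_bin].mean() < threshold:
            return (b + 0.5) * bin_kb
    return None  # mean r^2 never falls below the threshold within the data
```

Run per chromosome, a routine of this kind would support the sort of comparison drawn above between the A and C subgenomes, namely the different physical distances at which mean r² decays below 0.2 or 0.1.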
In summary, we have conducted a comprehensive analysis of changes in genomic variation and identified a number of genomic regions and loci subjected to selection. Firstly, we found a slightly higher level of genetic diversity for the A subgenome as compared to the C subgenome. Both of the subgenomes had a higher LD, and might be beneficial for MAS. Secondly, the program for breeding selection might decrease the genetic diversity of the population and some allelic variations would disappear or approach to fixation. Thirdly, most of the selected regions were distributed on the C subgenome, which indicated that the C subgenome might have been under stronger selection pressure, or contributed more towards GCA improvement in rapeseed hybrid breeding. Finally, we have identified several potential selection targets, genomic regions and loci, which provided further insight into rapeseed research and improvement. Materials and Methods Plant materials, phenotype evaluation, and GCA estimation. We used two new types of rapeseed (artificial synthetic B. napus and subgenomic materials) and winter type rapeseed in the present investigation: (1) 41 artificial synthetic B. napus from the University of Goettingen, Germany, were crossed with three winter type lines (SW0736, SW0740 and SW0784) from Sweden in 1999; the F 5 families were crossed with Ogura-INRA CMS lines and restorer line R2000 in 2004. (2) Seven subgenomic materials (A r A r C c C c ) 45 , 2 Pol CMS restorers (5148R and 6178R) and yellow seed coat variety No2127 were crossed with R2000. F 1 obtained from (1) and (2) was used to construct the recurrent selection population in 2005 in Huazhong Agricultural University, China. A recurrent selection program was used to improve the GCA level of the population. The previous populations were randomly pollinated in an isolated environment. The sterile plants were harvested and the seeds were used to construct the next generation population. Meanwhile, seed quality (oil content, glucosinolate and erucic acid) was considered as an important breeding goal. In 2012, we randomly selected 175 plants (with the Rf o gene) and crossed them with three different testers (Yu7-120, Yu7-126 and Yu7-140) to produce hybrid seeds following NC II design 46 , resulting in a total of 525 hybrids. All hybrids, and 178 parental lines, were sown in three semi-winter rapeseed environments (Wuhan, 29°58′ N, 113° 53′ E; Xiangyang, 32° 04′ N, 112° 05′ E and Yichang, 30′ 40′ N, 111° 45′ E) in China. Field trials were followed as completely random design with two replications at each location. General combining ability (GCA) of each parental lines was calculated using the formula: gi = yi-ŷ, where gi stands for GCA of parental line, yi and ŷ each stand for the mean of crosses with same parent Pi and the mean of all crosses, respectively 47 . Based on the GCA, we set 20% as the selection intensity, which could not rapidly decrease the genetic diversity of the population. Afterwards, 35 lines with a high GCA were selected from the population, defined as the selected population or group. The other three genealogical lines (zhongyou821, zhongshuang4, and zhongshuang5, Fig. S1) were used to detect the genome changes in pedigree breeding. The zhongyou821 is highly considered for rapeseed breeding in China. Many elite inbred lines including both open pollination cultivars and hybrid parents, were developed from this line. 
For example, both zhongshuang4 and zhongshuang5 are derived from zhongyou821, and bred as open pollination cultivars. Recently, F 1 hybid of zhongshuang4 and Pol CMS lines is found to exhibit excellent heterosis performance. Therefore, zhongshuang4 is considered as a good restorer line and used to develop several other hybrid cultivars and restorer lines. SNP filtering and genotype analysis. Genomic DNA was extracted from young leaves using the cetyl triethyl ammnonium bromide (CTAB) method 48 . The Illumina BrassicaSNP60 Bead Chip containing 52,157 SNPs was employed to genotype this panel of rapeseed. The experiment followed the manufacturer's protocol as described by Illumina Company. (http://www.illumina.com/technology/infinium_hd_assay.ilmn). The SNP data was clustered and called automatically using the Illumina GenomeStudio genotyping software. SNPs with no polymorphism and missing value > 10% were excluded. The source sequences of the remaining SNPs were identified through BlastN searches against the reference genome sequence of Darmor-bzh 49 (http://www.genoscope. cns.fr/brassicanapus). SNPs with an ambiguous physical position or multiple blast-hits were also excluded from the genotype data sets. Polygenetic and linkage disequilibrium analysis. Genetic diversity (π ), polymorphism information content (PIC) and alleles frequencies of each SNP on 19 chromosomes were estimated by the PowerMarker software 50 . Linkage disequilibrium (LD) between SNPs was calculated by all markers using the TASSEL software version 5.1 51 . LD decay was evaluated on the basis of the r 2 value and corresponding distance between two SNPs. Selected regions, Fixed SNP and candidate QTL detecting. To calculate diversity changes across the genome, a sliding window method was used to analyze each chromosome separately, with a window size of five SNPs and a sliding step of two SNPs. Ratio of the genetic diversity value of each window between selected and basic populations was used to identify genomic regions affected by selection, which was estimated by the formula: π Ratio = π basic /π selected . We selected the top 5% windows as candidate regions for further analysis. In addition, we analyzed many reported QTLs of rapeseed yield and yield-related traits. If the closely linked markers or the mapped interval were located in or overlapped with selected regions, we considered them to be candidate selected QTLs. We also calculated the allele frequencies of each SNP on the 19 chromosomes, and identified the SNPs which allele frequencies were changed to a hundred percent in the selected population and defined such SNPs as Fixed SNP. Genome changes detecting during the pedigree breeding. We used the Beagle4.1 software 52 to detect the chromosome segments of identity by descent (IBD) 53 between the two half-sib sister lines (zhuangsh-uang4 and zhongshuang5) and their common ancestor (zhongyou821) by genome-wide SNP markers. The P value of the significant level was set as 1 × 10 −7 . Uncertain regions (not defined IBD segments) were equally appropriate into the two adjacent blocks. We surveyed the inherited proportion of their genome from zhongyou821 and set different colors for chromosome segments according to the type of IBD.
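As a rough illustration of the window-based scan described above, the sketch below mirrors the five-SNP window and two-SNP step, flags the top 5% of windows by the ratio π_basic/π_selected, and lists SNPs whose allele frequency has reached 0 or 1 in the selected group. It is a simplified stand-in, not the PowerMarker workflow used in the study: per-SNP diversity is approximated here by the expected heterozygosity 2p(1 − p), and all function names are hypothetical.

```python
import numpy as np

def expected_heterozygosity(p):
    """Per-SNP diversity proxy 2p(1-p) computed from allele frequencies;
    a simple stand-in for the pi statistic reported by PowerMarker."""
    p = np.asarray(p, dtype=float)
    return 2.0 * p * (1.0 - p)

def pi_ratio_scan(freq_basic, freq_selected, window=5, step=2, top_frac=0.05):
    """Sliding-window scan along one chromosome (SNPs in map order).
    Returns start indices of windows whose ratio pi_basic / pi_selected
    lies in the top `top_frac` fraction, i.e. candidate selected regions."""
    pi_b = expected_heterozygosity(freq_basic)
    pi_s = expected_heterozygosity(freq_selected)
    starts, ratios = [], []
    for i in range(0, len(pi_b) - window + 1, step):
        num = pi_b[i:i + window].mean()
        den = pi_s[i:i + window].mean()
        starts.append(i)
        ratios.append(num / den if den > 0 else np.inf)
    ratios = np.asarray(ratios)
    cutoff = np.quantile(ratios[np.isfinite(ratios)], 1.0 - top_frac)
    return [s for s, r in zip(starts, ratios) if r >= cutoff]

def fixed_snps(freq_selected):
    """Indices of SNPs whose allele frequency is 0 or 1 in the selected
    group ('Fixed SNPs' in the text above)."""
    p = np.asarray(freq_selected, dtype=float)
    return np.where((p == 0.0) | (p == 1.0))[0]
```

Windows flagged per chromosome could then be merged into contiguous candidate regions and intersected with published QTL intervals, in the spirit of the analysis reported above.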
v3-fos-license
2022-12-14T16:04:30.274Z
2022-12-12T00:00:00.000
254622534
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://doi.org/10.1002/wat2.1624", "pdf_hash": "a53e818807652d65f6b8c751b99e7fa1326ac53b", "pdf_src": "Wiley", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:92", "s2fieldsofstudy": [ "Geology" ], "sha1": "8c9c8f762d8a4ceebbd202fbf2a2fa367269a500", "year": 2022 }
pes2o/s2orc
Reanimating the strangled rivers of Aotearoa New Zealand Contemporary management practices have artificially confined (strangled) river systems in Aotearoa New Zealand to support intensified land use in riparian areas. These practices work against nature, diminishing the functionality and biodiversity values of living rivers, and associated socio‐cultural relations with rivers. River confinement can accentuate flood risk by promoting development in vulnerable locations and limiting the flexibility to adapt to changing climate, prospectively accentuating future disasters. To date, uptake of space‐to‐move management interventions that seek to address such shortcomings is yet to happen in Aotearoa New Zealand. This is despite the fact that such practices directly align with Māori (indigenous) conceptualizations of rivers as indivisible, living entities. Treaty of Waitangi obligations that assert Māori rights alongside colonial rights of a settler society provide an additional driver for uptake of space‐to‐move initiatives. This article outlines a biophysical prioritization framework to support the development and roll out of space‐to‐move interventions in ways that work with the character, behavior, condition, and evolutionary trajectory (recovery potential) of each river system in Aotearoa. . Removal of meander bends narrows and simplifies river morphology, increasing the slope and energy of flow within the channel. Such practices, and the outcomes they engender, create path dependencies as they lock-in future management options as economic development and overcapitalization of the defended valley corridor skews the balance of costs-and-benefits in favor of maintaining and upgrading control practices over deteriorating ecological and cultural values (Holling & Meffe, 1996;Hutton et al., 2019;Tobin, 1995). Across the world, this orthodox logic and the underpinning social construction of values (Chan et al., 2016) is increasingly being challenged and tipped in favor of environmental rehabilitation. Innovative solutions that move beyond this pathology embrace proactive, precautionary, and preventative plans and restorative actions that address concerns for environmental protection and repair (Brierley & Fryirs, 2022). This is showcased, for example, by highprofile dam removal interventions in the USA (e.g., Foley et al., 2017;Poff & Hart, 2002) and the advent of space-tomove river management programs in Europe (Buijse et al., 2002;Piégay et al., 2005) and North America (Biron et al., 2014). This nascent paradigm shift reflects a growing consensus that confining dynamic rivers has negative and lasting ecosystem-scale impacts on biodiversity and may actually accentuate channel instability and elevate the risk of catastrophic flooding (e.g., Roni & Beechie, 2012). Recognizing that change is underway as part of Aotearoa New Zealand's first national adaptation plan (Ministry for the Environment, 2022), in this article we explore why contemporary river management practices in Aotearoa New Zealand continue to emphasize engineering solutions that prioritize flood protection to the detriment of more inclusive practices that address concerns for ecosystem and cultural values. In large measure, this situation reflects the funding of flood protection schemes through targeted rates levied on the adjacent landowners, who in turn benefit through developing the protected land and raising its economic value. This creates a feedback mechanism that is resistant to change and innovation. 
Associated with this "development", the cost of breaches to flood defenses has also increased. We contend that this approach to river management has promoted socially inequitable and environmentally and economically unsustainable outcomes. In relation to these concerns, we highlight two emerging transformations in Aotearoa New Zealand. The first lies in an urgent and deliberate reframing of perspectives and priorities that proactively asserts indigenous (Māori) values and conceptions of rivers as living, indivisible entities that have assured rights under a foundational national document, Te Tiriti o Waitangi | The Treaty of Waitangi (Parsons et al., 2019;Ruru, 2018;Stewart-Harawira, 2020;Te Aho, 2019). Intriguingly and generatively, Māori notions of rivers as dynamic continua resonate with hydrogeomorphic perspectives that emphasize longitudinal and lateral connectivity (Brennan et al., 2019;Ward et al., 1999;Wohl et al., 2005, 2015, 2019) and disturbance (Richards et al., 2002;Stanford et al., 2005) as fundamental determinants of the ecological health and integrity of river systems (Wilkinson et al., 2020). The second transformation recognizes growing awareness of an emerging crisis wherein attempts to stabilize high-energy rivers through confinement may inadvertently be manufacturing future disasters by accelerating aggradation and enhancing the risk of catastrophic breakout flooding. Rivers in Aotearoa New Zealand have some of the highest sediment yields per unit area in the world (e.g., Hicks et al., 2011;Walling & Webb, 1996). Increases in the frequency and magnitude of high flows under climate change will likely increase sediment delivery through confined reaches, accentuating management problems associated with elevated bed levels in bedload-dominated systems. This is especially evident on active alluvial fans that are sites of productive agricultural land and some urban settlements (Davies & McSaveney, 2006). This sets in train a race to the bottom that pitches increasingly unsustainable upgrades to defenses against rising river bed levels, amplifying elevation differences between the confined river corridor and its surrounding floodplain (Figure 1a). In this article, we present a simple biophysical prioritization framework that can be used to assess options for the uptake of space-to-move interventions under different physical, ecological, and social contexts. We contextualize this study with a brief overview of circumstances that created the strangled rivers of Aotearoa New Zealand and approaches to space-to-move interventions that have been developed and applied in other parts of the world.

consent, and are not reflected in this total. These practices have differing impacts upon different types of river across the country, with variable consequences and prospects for recovery (discussed later; see Figure 1). The strangled rivers of Aotearoa New Zealand reflect institutional legacies and assertions of fixity of a colonial settler society that sought to control fluvial hazards, conceptualizing rivers in the service of society through measures that impose expectations of a familiar, liveable world in ways that reflect nostalgic memories of the mother country (Beattie & Morgan, 2017;Goodin, 2012). Engineers demonstrated considerable technical prowess in their efforts to train rivers into stable, hydraulically efficient conduits that are conceived to be more predictable and manageable (Knight, 2016).
Short-term successes in this regard, however, led to long-term unsustainable outcomes in biophysical, socioeconomic, and cultural terms (Table 1; Arnaud et al., 2015;Bravard, 2010;Frings et al., 2019;Knox et al., 2022;Tena et al., 2020). Impacted systems have been referred to as zombie rivers (Mitchell & Williams, 2021)-rivers that are increasingly devoid of life and diversity. In many instances, managed channel forms and corridor width are out of balance with prevailing catchment conditions. Active and continuous intervention is required to maintain flood capacity, yet equivalent outcomes could be achieved by simply giving the river more space, and allowing channels to adjust. As disturbance-driven entities, naturally unconfined and free-flowing rivers continuously adjust their form to variations in the flow of water and sediment, and changes in local or regional base level (Brierley & Fryirs, 2022). Working against nature does not work (Fryirs & Brierley, 2021;Newson, 2021). Nature always reasserts itself, as channels seek to recapture the space that has been taken from them. Semantics matter in these deliberations. Floods-or any "hazards" for that matter-generate anthropocentric concern for the management of "risk" (e.g., protection of values pertaining to people, places, properties, etc.; see Weir et al., 2021). Yet the river is simply being a river. Despite the inherent logic, this thinking is largely absent in management responses to recent flood events in Aotearoa that resulted in significant damage to infrastructure, severing critical communication links and impacting heavily on riparian agricultural land (Mitchell & Williams, 2021;Williams, 2022). Subsequent calls to "put the river back" and reinstate and elevate protection measures reflect the dominant mindset which seeks to place rivers out of sight, out of mind (Knight, 2016). Such framings separate human societies from rivers, promoting a perception that rivers are over there-something to be managed by someone else. Prevailing socioeconomic relations to rivers that assert human authority and the imperative to maintain assets and infrastructure in the interests of "protecting" society limit the scope of future management options. Legacy effects and locked-in path dependencies are difficult and expensive to revoke, creating inequities for subsequent generations (Winter, 2020;Wohl, 2019). For at least three generations, confined and defended rivers have constituted an accepted baseline state (e.g., Pauly, 1995;Soga & Gaston, 2018).

Figure 1. Photomosaic showing geographic variability in anthropogenic river confinement for various strangled rivers of Aotearoa. The Tukituki River (image a and cross-section) siphons sediment from a large distributary system as part of a flood protection scheme. Bed aggradation between confining stopbanks leaves the river several meters above the surrounding floodplain. Image b shows the location of Greytown on the fan of the Waiohine River, highlighting the vulnerability to aggradation-induced stopbank breach. Stopbanks along the Ashburton River (image c) divorce the river from its ancestral floodplain, restricting the contemporary channel to a narrow fairway. The color scale on these images indicates the departure from a trend surface based on LiDAR data. Ashburton and Waiohine have a 10 m range (±5 m) and Tukituki has a 2.5 m range (+2 m, -0.5 m). White lines indicate stopbank locations.
Over time, the assumption that this state is normal creates a perceptual, cognitive, and infrastructural lock-in that is self-reinforcing. Problems that ensue are likely to increase in response to climate change and further land use intensification on valley floors. The quest to reanimate the strangled rivers of Aotearoa New Zealand seeks a radical repositioning of conventional thinking and a rejuvenation of sociocultural connections to rivers. Embracing the environmental values of dynamically adjusting rivers acknowledges the need to understand hazards and mitigate associated risks. We contend, however, that solutions lie not in pinning channels in place, but rather by adapting societal uses of river corridors and, where possible, keeping assets out of harm's way. | THE GENERATIVE POTENTIAL OF SPACE-TO-MOVE PROGRAMS In various parts of the world, space-to-move and similar interventions address concerns for river health and flood risk by returning parts of the valley floor to rivers, even where anthropogenically enforced separation has endured for hundreds of years (e.g., Buijse et al., 2002;Formann et al., 2014;Formann & Habersack, 2007;Habersack & Piégay, 2007;Jungwirth et al., 2002;Piégay et al., 2005;Schmutz et al., 2016;Smith et al., 2017). These initiatives reconnect severed socio-cultural linkages with rivers, restoring psychological, and recreational associations (e.g., walking, fishing, swimming, boating) as well as biophysical connections (e.g., channel-floodplain linkages, riparian wetland function, soil regeneration, and ecological enhancement; Table 1). The concept of allowing space for flooding and movement of river channels, including initiatives to re-wild or re-nature rivers, is practiced under different auspices and names in different parts of the world [e.g., channel migration zone (Rapp & Abbe, 2003); erodible corridor (Piégay et al., 2005); fluvial territory (Ollero, 2010); river corridor (Kline & Cahoon, 2010); freedom space (Biron et al., 2014;Buffin-Bélanger et al., 2015)]. Such interventions seek to achieve a balance between the environmental benefits derived from allowing the river to flow freely and self-adjust within the river corridor, while maximizing public security and economic benefits by protecting property and infrastructure outside of the river corridor. Major engineering interventions as part of a Room for the River program have achieved considerable success in countries such as the Netherlands. For example, recent activities celebrated a comprehensive program of initiatives on the lower reaches of the Meuse River over the last 30 years (Van Looy & Kurstjens, 2021), and extensive redesign of a major bend of the Waal River at Nijmegen (e.g., Edelenbos et al., 2017;Schouten, 2016;Verweij et al., 2021; see Room for the River ProgrammejDutch Water Sector). Essentially, these nature-based solutions work with the river and allow for its variability (Albert et al., 2021;Fryirs & Brierley, 2021;Nesshöver et al., 2017;Newson, 2021). Sometimes these programs incorporate managed retreats, removing assets, and infrastructure from threatened areas (e.g., Mach & Siders, 2021;Moss et al., 2021;Wible, 2021). Opening up the river can have considerable on-site and off-site benefits, relieving the impacts of inundation in downstream reaches. Despite the long-standing recognition of the multiplicity of benefits of space-to-move interventions (Table 1), the uptake of such practices has, so far, been very limited in Aotearoa New Zealand. 
| CONTEMPORARY RIVER MANAGEMENT PRACTICES IN AOTEAROA NEW ZEALAND Although the history of systematic direct human manipulation of river systems in Aotearoa New Zealand is relatively short (Knight, 2016), profound anthropogenic impacts are evident across most of the country (Fuller & Rutherfurd, 2021). Given the late stage of the colonial settlement of Aotearoa New Zealand relative to other parts of the world, there are relatively few instances in which urban and industrial developments are located immediately adjacent to major rivers. Comprehensive river management schemes, undertaken under the auspices of Total Catchment Management, only started in the middle of the twentieth century (Memon & Kirk, 2012). Over time, however, land-use practices have sought to optimize agricultural expansion across lowland environments and ever higher into the catchment. Realignment of the drainage network through subsurface drains, drainage canals, and ditches has reduced natural attenuation and retention of storm flows, thereby generating exceedingly high peak flows to the mainstem channel. Enhanced continuity of flow to the lower system has altered the timing and intensity (thus inundation and erosivity) of flooding downstream. Alongside this, forest clearance and management practices in steepland terrain have increased delivery of debris flow and landslide materials into river systems (cf., work by Jakob et al. (2020) in British Columbia). Although flooding impacts are primarily restricted to local urban areas and agricultural lands, given the influence of primary industries upon the export-driven economy of Aotearoa New Zealand, these are significant socioeconomic and political concerns. Despite long-standing awareness of these impacts, pressures for land-use intensification on valley floors continue to further restrict rivers to this day (Mitchell & Williams, 2021). Contemporary river management practices lie in stark contrast to the world-leading emphasis on the importance of sustainability that underpinned the legal framework for the management of natural resources in Aotearoa New Zealand. The Resource Management Act (RMA), established in the early 1990s, defines the bed of a river as: "the space of land which the waters of the river cover at its fullest flow without overtopping its banks" (Resource Management Act 1991 No. 69 (as at 21 December 2021), Public Act 2 Interpretation-New Zealand Legislation). Associated assertions of river channels as "static, clearly, and cleanly delimited" entities are clearly inconsistent with both westernscience and M aori ontologies that acknowledge rivers as dynamically adjusting continua and in many cases, inherently messy. Doubling down on this paradox, the braided rivers of Aotearoa New Zealand, characterized by multiple, continuously shifting channels, and distinctive ecosystem values (Gray & Harding, 2007;Hicks et al., 2021), remain important as beacons of a wild, untamed landscape in the psyche of many New Zealanders, yet all-too-often they are managed in a narrow straight-jacket between stopbanks and engineered banks of willows . More problematically, this policy framing establishes and perpetuates undue confidence in the totality of flood protection. This, in turn, encourages further development, thereby raising land and property values and initiating an unending cycle that necessitates on-going investment in flood protection at all costs (cf., Donaldson, 2021). 
Ultimately, all flood defenses have limits, and incentivizing development on naturally vulnerable land serves only to engineer disasters into the future (Tobin, 1995). Eventually, regulating structures will fail or require prohibitively expensive maintenance. Commenting on a parallel situation in response to recent floods in British Columbia, Canada, geomorphologist Brett Eaton noted: This wasn't a natural disaster; this was an infrastructure disaster (Globe and Mail, 2021). Meanwhile to the indigenous people on whose land the flood occurred, the real disaster took place a hundred years ago when Sumas Lake was drained by settlers to create the agricultural Sumas "Prairie" (https://globalnews.ca/news/8385289/sumas-lakereflection-first-nations/). Legacy effects and the memory of what is gone before constrain contemporary and future options. Such realities exemplify the importance of long-term indigenous knowledge as an integral part of climate change adaptation programs (Caretta et al., 2022). We contend that contemporary management practices in Aotearoa New Zealand put people and infrastructure on a collision course with rivers. Unfortunately, the contemporary governance system that uses targeted rates to fund the costs of land drainage and flood protection neglects the wider value that is provided by rivers and their riparian corridors, including ecosystem and cultural services (Table 1). Strangled rivers that create the conditions for future disasters reflect a lack of proactive and precautionary planning, limiting prospects to adapt to a changing climate that will include more frequent and more severe high flows (Arnell & Gosling, 2016;Slater et al., 2021;Tellman et al., 2021;Winsemius et al., 2016). Reliance on conventional approaches to flood management, and associated instruments that leverage the past as a guide to future behavior, are hazardous ways to prepare for increasingly uncertain futures (e.g., Sofia & Nikolopoulos, 2020;Tonkin et al., 2019). Today's "extreme" events will become the new normal while new levels of severity loom in the future. As more assets and infrastructure lie within harm's way, the potential for disastrous effects can only increase. How effectively can existing infrastructure handle multiple hazards, or concatenations of events? Crawford-Flett et al. (2022), for example, estimate that up to 80% of stopbanks in the Christchurch area may be prone to liquefaction following seismic events. Despite increasingly acknowledged obligations under the Te Tiriti o WaitangijThe Treaty of Waitangi, consecutive governments in Aotearoa New Zealand have consistently failed to change management mindsets and practices that conflict directly with Indigenous (M aori) relations to rivers. In light of prospects for reframed management practices, what do just and equitable approaches to adaptation look like, and how do they work? | CULTURE CLASH: M AORI CONCEPTUALIZATIONS OF RIVERS AS LIVING ENTITIES Historically, legal conceptualizations of rivers in Aotearoa New Zealand have failed to incorporate indigenous (M aori) relations to river systems (Harmsworth et al., 2016;Hikuroa et al., 2021;McAllister et al., 2019;Stewart-Harawira, 2020; Te Aho, 2019). Colonization and industrialization ruptured traditional M aori relationships to freshwater bodies and overlooked customary practices tied to M aori knowledge, values, and ethics (Stewart-Harawira, 2020). Parsons et al. 
(2019) show how successive generations of government policies and actions directed with a specific goal and underpinned by dominant social values created a profoundly path-dependent system of managing rivers. For example, Coombes (2000) noted that evidence presented in both the Tribunal hearings and the earlier compensation court cases for the Waipaoa River flood scheme near Gisborne documents M aori alienation from use of river systems, referring to stopbanks as boundaries and obstacles. Dispossession of lands and waters also deprived M aori of their rights and inherent responsibilities to enact traditional customary practices of kaitiakitanga or stewardship of the natural environment, disrupting long-standing expressions of deep relationality through genealogical connection (Stewart-Harawira, 2020). Notions of management have a very different meaning when framed in relation to M aori ways of knowing and living with (being a part of) rivers (Ko au te awa, ko te awa ko au; I am the river and the river is me; Rangiwaiata Rangitihi Tahuparae in Wilson, 2010). Rather than envisaging and striving to achieve a particular state over a given timeframe through particular management goals, ancestral connections to rivers emphasize an ongoing commitment to a duty of care, living sustainably with, and as a part of, the river . Resonant scientific themes that emphasize complexity, connectivity, contingency, and emergence are remarkably consonant with Indigenous knowledge or m atauranga M aori Hikuroa, 2017;Wilkinson et al., 2020). M aori value rivers as holistic entities that are more ancient and more powerful than people, with lives and rights of their own (Ruru, 2018;Te Aho, 2019). In these relational ways of knowing and being, land, forests, rivers, and oceans are simultaneously considered ancient kin, revered elders, and living entities (Salmond, 2014). Such relationships are expressed as whakapapa, a noun or verb that expresses a complex web of privileges and obligations that incorporate ancestral relations through concerns for descent, lineage, connections, identity, and so forth in space, through time. Rights, and the obligations that come with them, are derived from collective relationships, inherited from ancestors, including direct relationships and responsibilities to land and waterways. Claims for mana tupuna-authority deriving from ancestors-are expressed through discharging obligations to care for land and waterways. This can, in turn, confer the privileges of mana whenua-the authority to make collective decisions in the use and care of land-and mana moana-the authority to make collective decisions in the use and care of water. Concerns for tikanga-the customary system of values and practices that have developed over time and are deeply embedded in their social context-reflect and express a respectful, relational and emergent lens-an ethical way of being that openly acknowledges the rights of nature and the rights of the river (Ruru, 2018). In these framings, an holistic lens incorporates respect for the mauri (lifeforce) and the mana (authority) of each river, innately conceptualized as indivisible entities at the catchment scale-Ki Uta Ki Tai (From the Mountains to the Sea). Concerns for ora (collective health and wellbeing, for the river, the society, and the environment), embrace respectful ways of living with the diversity, morphodynamics, and evolutionary traits of every river (cf., a state of mate, disrepair; Hikuroa et al., 2021). 
Understandings of and concerns for reciprocity, interdependence, and co-evolutionary relations recognize that what's good for the river is good for society, and vice versa, as inherently these are parts of the same thing (Salmond, 2017). In the parlance of management, this can perhaps be expressed as managing for, and managing with, giving primacy to integrity and societal relationships to rivers, not the rivers themselves. How can we assume to manage rivers that are more ancient and powerful than us? Viewed through a M aori lens, separating humans from rivers through stopbank construction severs deep-seated ancestral relations to rivers, disrespecting regard for taniwha (supernatural beings that may be considered highly respected kaitiaki (protective guardians) of people) and taonga (often translated as treasure, but better understood in the active sense, that is to be treasured or relational, what do you treasure?). A M aori lens reframes the managerial and engineering question "how much space does a river need?" into a much deeper relational question, "how can we live with the river as a living, indivisible entity?" Seeing ourselves as part of the river recognizes that damaging it inescapably damages us. It is only in recent years, largely associated with the Treaty of Waitangi tribunal processes (Harmsworth et al., 2016), that the quest to redress indigenous dispossession and marginalization of M aori values has gathered momentum in Aotearoa New Zealand (Memon & Kirk, 2012;Paterson-Shallard et al., 2020). Increasingly, efforts to break path dependencies incorporate formal recognition of M aori governance, values, and knowledge within policies, translating M aori values into tangible actions that seek to destabilize Western commandand-control approaches to flood risk management (Harmsworth et al., 2016;Parsons et al., 2019). Biophysical imperatives to reanimate the strangled rivers of Aotearoa New Zealand synchronize directly with M aori conceptualization of rivers as living entities. In this regard, the pan-tribal flora and fauna claim (WAI 262) is pivotal to any discussions pertaining to the impacts of stopbanks. Responding to this claim, The Waitangi Tribunal in its report, Ko Aotearoa Tenei (This is New Zealand; Waitangi Tribunal, 2011) states that "Kaitiakitanga is the obligation, arising from the kin relationship, to nurture or care for a person or thing… Kaitiaki can be spiritual guardians existing in nonhuman form… But people can (indeed, must) also be kaitiaki… Mana and kaitiakitanga go together as right and responsibility, and that kaitiaki responsibility can be understood not only as cultural principle but as a system of law." Failure to embrace the potential of space-to-move interventions as a basis to address concerns for strangled rivers reflects an abjuration of guarantees made in Te Tiriti o Waitangi. In accordance with Treaty obligations, M aori are rightsholders rather than stakeholders. Implementation of policies and plans by local authorities in Aotearoa New Zealand, under the auspices of the RMA, are required to give effect to the National Policy Statement for Freshwater Management 2020 (NPSFM; building on previous manifestations in 2014 and 2017). This policy document explicitly recognizes the need to acknowledge Te Mana o te Wai, expressed as the innate relationship between the health and well-being of the water and the wider environment and their ability to support each other while sustaining the health and well-being of people (Te Aho, 2019). 
Conceptualizations of rivers as living entities to whom society has distinct (shared) responsibilities provide a readybuilt interpretive framework for catchment-specific applications . As formally acknowledged in the Engineering NZ Climate Change Position Paper (2021), working in alignment with the principles of te ao M aori and m atauranga M aori is required to proactively support a just transition in climate change adaptation programs and to embrace a sustainability lens. Such practices "work in harmony with the environment, actively enhancing the mana of te taiao as well as mitigating and minimising harm" (p. 5). Perhaps inevitably, the management perspective is the real laggard, entrenching historic power relationships and in so doing, failing to deliver proactive and precautionary practices that meet obligations to M aori under the terms of the Treaty of Waitangi. Here we ask: What will it take to give effect to Te Mana o te Wai, enhancing our collective role as guardians (kaitiakitanga) in efforts to revitalize, reanimate and resuscitate the strangled rivers of Aotearoa? We specifically consider two aspects of this aspiration. First, we develop a biophysical prioritization framework that appraises prospects to design and implement space-to-move interventions in differing situations. Second, we assess various policy implications of such programs, highlighting some of the issues to be addressed in scoping prospects to bring about a transformation in practice. | A GEOMORPHIC PERSPECTIVE ON SPACE-TO-MOVE INTERVENTIONS IN AOTEAROA NEW ZEALAND Geomorphologists have established clear understanding of the factors that affect the ways rivers appear, work, and evolve (Kasprak et al., 2016). Geomorphology is not a linear, cause-and-effect science (Grant et al., 2013). Inherent uncertainties accompany understandings of rivers as nonlinear, contingent, and emergent entities, wherein universal principles play out in distinctive ways in any given catchment (e.g., Brierley et al., 2013;Phillips, 2007). While such complexity poses challenges to conventional management approaches, it sits comfortably alongside M aori interpretations of relationships to rivers and the importance of catchment-specific knowledge. In the deeply contextual relationships that underpin whakapapa, assertions of generic understandings and seeking universal truths have no place. In scientific terms, generating catchment-specific geomorphic knowledge is now a relatively straightforward task, facilitated by readily available high-precision topographic and Earth observational data and a wide range of analytical tools (e.g., automated machine learning applications and modeling toolkits; see Boothroyd et al., 2021;Fryirs et al., 2019;Piégay et al., 2020;Reichstein et al., 2019). In a New Zealand context, this is supported by impressive longterm national-scale datasets and toolkits such as the River Environment Classification (REC) and Freshwater Environments of New Zealand (FRENZ) (Snelder et al., 2004). However, to date, remarkably few studies document systematic catchment-wide appraisals of river evolution, explaining forms, and rates of adjustment to inform predictions of prospective river futures (Downs & Piégay, 2019;cf., Walley et al., 2020). Carefully selected archetypal histories conducted in different landscape settings for rivers subject to differing forms of anthropogenic disturbance would be very helpful in efforts to address this shortcoming. 
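As one concrete illustration of how such catchment-specific appraisals can be operationalized, the sketch below differences two co-registered, LiDAR-derived DEMs to map erosion and deposition within a river corridor. It is a minimal, hypothetical example: the synthetic surfaces, cell size, and level-of-detection threshold are assumptions for demonstration, and real applications propagate survey uncertainty far more carefully, typically using the toolkits cited above.

```python
import numpy as np

def dem_of_difference(dem_new, dem_old, cell_size=1.0, min_detectable=0.2):
    """Difference two co-registered DEMs (metres) and summarize change.

    `min_detectable` is a simple level-of-detection threshold (m) below which
    change is treated as survey noise; real studies propagate uncertainty
    more rigorously.
    """
    dod = dem_new - dem_old                      # + = deposition, - = erosion
    dod = np.where(np.abs(dod) < min_detectable, 0.0, dod)
    cell_area = cell_size ** 2
    erosion_volume = -dod[dod < 0].sum() * cell_area
    deposition_volume = dod[dod > 0].sum() * cell_area
    return dod, erosion_volume, deposition_volume

# Toy example with synthetic "2016" and "2023" surfaces (values in metres).
rng = np.random.default_rng(0)
dem_2016 = rng.normal(10.0, 0.5, size=(200, 200))
dem_2023 = dem_2016 + rng.normal(0.0, 0.3, size=(200, 200))
dod, eroded, deposited = dem_of_difference(dem_2023, dem_2016, cell_size=1.0)
print(f"eroded ~{eroded:.0f} m3, deposited ~{deposited:.0f} m3")
```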
Much work remains to be done in developing a shared understanding of each river's story, appraising what efforts to work with the river look like and how to operationalize such understandings. In scientific terms, regional LiDAR coverage (LINZ, local councils) and capacity to systematically characterize and explain the recent history of local/reach scale river adjustments in relation to catchment-specific attributes and connectivity relationships highlights the potential to "change the game". Introduction of bathymetric LiDAR will further enhance these prospects, supporting monitoring programs that will increasingly reveal catchment-specific relations between headwater change and downstream responses, as well as links to land-use change, storm (cyclonic) events, tectonics, riparian vegetation change, and so forth. In Table 2, we present a geomorphologically informed approach to support the development and implementation of space-to-move interventions in Aotearoa New Zealand. Biophysical considerations underpin prospects to reanimate strangled rivers as they help to determine what is realistically achievable in any given instance. Context is everything (Kondolf & Yang, 2008). In large part, this reflects the extent to which anthropogenic activities have modified and constrained a reach (and associated use of the land that has been set aside), and catchment-scale considerations that determine recovery potential. Geographic and historical considerations such as geomorphic setting and anthropogenic imprint (settlement, assets, infrastructure, and critical lifelines) fashion the vulnerability of a given river system. The uneven distribution of stopbanks across the country (Crawford-Flett et al., 2022) reflects different types and severity of problems (Figure 1). In regions such as Southland, for example, extensive use has been made of stopbanks that are offset some distance from the contemporary channel, allowing the river sufficient space-to-move such that it retains a good degree of geo-ecohydrological functionality. As outlined below, this is not the typical situation, and contextual considerations vary markedly across the country. Effective management practices and associated policy framings recognize explicitly distinctive river morphologies and associated behavioral and evolutionary traits. Threats presented by geo-eco-hydrological impacts of strangled rivers, and prospects to address them, vary markedly for different types of river and with scale (Figure 1, Boxes 1-3).

TABLE 2 A geomorphologically informed pathway to support the development and uptake of space-to-move interventions in Aotearoa New Zealand

Articulate distinctive values in light of catchment-specific contextual considerations
- Develop and share understandings of each river's story, compiling and disseminating technical (geomorphic, ecological, hydrological, etc.), mātauranga, and social understandings of the meanings of a living river at a given location: its character and behavior, distinctive values, contemporary condition, and relationships to (interdependence upon) other reaches (i.e., pattern of reaches, reach-reach connectivity, and tributary-trunk stream relations).
- Articulate what constitutes success in proactive, precautionary, and realistic catchment-scale visions, carefully contextualizing opportunities relative to limitations and risks of inaction (unsustainable practices that maintain the status quo).
- Determine what is desirable/achievable, assessing the sense of loss induced by stopbanks (e.g., cultural values, fishing, swimming, habitat).
- Assess what needs to be done to protect and/or enhance distinctive values/attributes (things that matter), including concerns for taonga (treasures), taniwha, and ancestral relations.
- Communicate aims, aspirations, and benefits of the proposed plan of action, clearly identifying its purpose/rationale and the supporting evidence base.

Appraise evolutionary trajectory to interpret future prospects
- Interpret evolutionary trajectory to determine controls upon forms and rates of river adjustment, unraveling cumulative impacts and path dependencies set by legacy effects.
- Interpret how changing boundary conditions and connectivity relationships, and associated stressors and limiting factors, impact upon flow/sediment regimes and the recovery potential of the system.
- Scope the future to assess what is realistically achievable (what is manageable).
- Assess how differing forms of anthropogenic modification and riparian vegetation and wood impact upon river form and flow/sediment conveyance.

Co-develop a pathway to implementation
- Identify and seek to address challenges, obstacles, impediments, roadblocks, and pinch points to implementation.
- Strategically address threatening processes, pressures, and stressors, giving particular attention to thresholds of potential concern. Minimize prospects for catastrophic change wherever and whenever possible.
- Manage land use problems at source and at scale, striving to ensure that management practices "do not fight the site" (Brierley & Fryirs, 2009; Fryirs & Brierley, 2021).
- Negotiate trade-offs and prioritize actions at the catchment scale, carefully considering treatment responses that minimize negative off-site impacts and legacy effects (i.e., do not transfer problems elsewhere; Schmidt et al., 1998).
- Co-develop risk maps and assess impacts of differing management strategies upon biophysical, socioeconomic, and cultural attributes, guiding interpretations of where differing forms of managed retreat may be possible. Differentiate reaches to retain as is (to protect assets/infrastructure) from reaches in which local measures are possible (e.g., reoccupation of abandoned secondary channels and oxbow lakes) and reaches that have genuine prospect for managed retreat and/or stopbank removal.
- Carefully consider use of archetypes for differing types of interventions in differing situations/circumstances. Prioritize trial applications, working first in instances with high recovery potential to demonstrate proof of concept in situations that have a high likelihood of success.
- Monitor the effectiveness of trial applications, including concerns for local values (e.g., tohu: sentinels, signals, acute observations of change) and appropriate measures of the physical habitat mosaic and morphodynamics (functionality), remembering that pretreatment data are vital. Modify practices and adapt behaviors based on learnings along the way.
- Communicate findings. Roll out and scale up applications appropriately.

BOX 1 An example of high-priority prospects for space-to-move interventions in biophysical terms

High-priority reaches for uptake of space-to-move interventions in biophysical terms have good potential to achieve tangible, clearly identified, and measurable benefits over a definable timeframe.
In order for a river to self-heal, it must have sufficient room to move and sufficient energy (stream power) to rework available sediments, such that an appropriate sediment load is able to establish a suitable level of heterogeneity (i.e., channel complexity and morphodynamic links to floodplains that shape the physical habitat mosaic of the river; Choné & Biron, 2016; Kondolf, 2011). Kondolf (2011) contends that habitat diversity is reduced if rivers are too dynamic, while low-energy, low-sediment load rivers may have limited prospects for self-recovery following channelization. We consider the Rangitata River on the Canterbury Plains as an example of a high-priority prospect for uptake of space-to-move interventions in biophysical terms. Iconic braided rivers in Aotearoa New Zealand are cherished for their aesthetic beauty and dynamics. Sediment reworking and recurrent channel adjustment regenerate rich, complex river habitats that support a high biodiversity (Gray et al., 2006; O'Donnell et al., 2016).

Sophisticated understandings of biophysical process interactions, alongside readily available datasets in Aotearoa, make explanation of the capacity for river adjustment and likely range of variability, and prediction of prospective river futures, a relatively straightforward task, at least in conceptual terms and in a statistical sense. Key differences are evident, for example, when considering braided rivers relative to wandering gravel-bed, active meandering, or passive meandering rivers (see, e.g., Brierley & Fryirs, 2022). Alongside this, appropriate measures for each and every river carefully consider the problem of scale. Larger rivers typically adjust over longer timescales, often with a significant memory of past river adjustments. Accordingly, impacts of past management practices, and prospects to address them, vary with position (and scale) along a given river.

[Imagery: Rangitata River, Canterbury Plains. 1937 imagery credited to Orianne Etter, for Forest & Bird, sourced from http://retrolens and licensed by LINZ; 2013 imagery captured for Environment Canterbury by Aerial Surveys Ltd, Unit A1, 8 Saturn Place, Albany, 0632, New Zealand; 2019 imagery contains data sourced from Canterbury Maps and partners, licensed for reuse under CC BY 4.0. Reach location is indicated in Figure 1.]

Geomorphic setting
The glacially fed Rangitata River drains the eastern Southern Alps (2865 m). Its catchment area as it exits from gorges onto the floodplain is around 1500 km². Frequent freshes and floods generated by headwater rainfall combine with abundant sediment derived from alpine processes to create one of New Zealand's iconic braided rivers. Floods tend to occur in December or January. The mean annual flood is 1350 m³/s. The river is largely naturally unconfined across the lower Canterbury Plains, with some Late Pleistocene terrace and bedrock confinement across the upper plains. The bed gradient of the Rangitata across its alluvial plain (below Arundel) is 6.2 m/km, which is steeper than the nearby Rakaia and Waimakariri floodplains. The Rangitata River is competent to convey gravel all the way to the coast.

Reach-scale anthropogenic constraints
Between Arundel and State Highway 1, the river naturally bifurcated into two main channels, the North and South Branches, separated by Rangitata Island. The 1937 image indicates the complex and diverse assemblage of bars, islands, and channels associated with the large range in river dynamics.
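To connect the discharge and gradient figures quoted in the geomorphic setting above with the "sufficient energy (stream power)" criterion for self-healing, the sketch below computes total and specific stream power. This is an illustrative calculation only: the active braidplain width is an assumed value, not one reported for the Rangitata.

```python
# Illustrative stream power calculation for a braided gravel-bed reach.
# Omega = rho * g * Q * S      (total stream power, W per metre of channel)
# omega = Omega / w            (specific stream power, W/m^2)

RHO = 1000.0   # water density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def stream_power(discharge_m3s: float, slope: float, active_width_m: float):
    total = RHO * G * discharge_m3s * slope          # W/m
    specific = total / active_width_m                # W/m^2
    return total, specific

Q = 1350.0          # mean annual flood quoted in the text, m^3/s
S = 6.2 / 1000.0    # bed gradient of 6.2 m/km expressed as m/m
width = 800.0       # assumed active braidplain width (illustrative only), m

total, specific = stream_power(Q, S, width)
print(f"total stream power   ~{total / 1000:.0f} kW/m")
print(f"specific stream power ~{specific:.0f} W/m^2")
# High specific stream power plus abundant gravel supply is what gives reaches
# like this a strong capacity to rework bars and reoccupy former braids.
```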
Exotic riparian plantings in buffer strips, combined with some stopbanks and rock groynes, artificially constrain the river, allowing agricultural intensification along much of the river's length across the Canterbury Plains. Stopbanks constructed on the true right bank of the channel are designed to prevent any flow in the South Branch except under severe flood conditions. The former active channel bed of the South Branch has been converted into agricultural use (notably dairy intensification). Although the river maintains an active braided form (2013 image), this modification has reduced the capacity of the river to adjust and occupy channel anabranches, rework bars, and form or maintain islands. Incorporation of former islands into the adjacent floodplain (2013 image) has removed important refugia for ground-nesting birds (O'Donnell et al., 2016). Despite the imposition of hard and soft management infrastructure, a flood with a peak discharge of 2270 m³/s on 7 December 2019 breached flood defenses in several places on the south bank, occupying the South Branch and other former braids (2019 image). This damaged roads, powerlines, the railway line, irrigation machines, and farmland. The peak discharge was close to a 1/10 AEP event. Impacts were accentuated as this event closely followed double peaks of around 1000 m³/s on 3 and 5 December, which had already overtopped the banks in places.

Catchment-scale recovery potential
The artificially constrained portion of the river has a high recovery potential. Allowing the river to permanently occupy the South Branch, and/or setting back willow buffers to allow more room for the main stem, would support natural braided river processes and allow island refugia to re-form (O'Donnell et al., 2016). An abundant supply of bedload-caliber material from the Southern Alps, coupled with frequent flows competent to mobilize bed material, provides a good prospect for rapid reoccupation of former channel courses and a widening of the braidplain. This would facilitate geo-ecological recovery of river functionality, reinvigorating the diversity of geomorphic units and habitats. The high value of intensive agricultural land presents a barrier to allowing the Rangitata more space to move. However, changing the funding mechanism for river management away from one reliant on adjacent landowners, and including ecological and cultural values in any cost-benefit analysis, could help to facilitate the uptake of space-to-move interventions. Opportunity costs associated with climate change adaptation include expenses that can be avoided through reduced impacts of future disasters while minimizing maintenance costs of path dependencies. Cost-effective programs minimize prospects that destructive and expensive pathways are set in train for rivers that presently remain relatively unrestrained. The cost of repair far outweighs the costs of proactive, preventative approaches in the management of these remnant (high conservation value) reaches. Collective socioeconomic and cultural gains would be achieved if the river is seen as less of a threat, restoring its mauri, mana, and ora.

BOX 2 An example of low-priority prospects for space-to-move interventions

Highly impacted reaches, sometimes referred to as sacrificial rivers (Bouleau, 2014), typically have limited prospects for uptake of space-to-move interventions as they have been subjected to irreversible change. Concerns for protection of high-priority assets over-ride all else.
Implicitly, once infrastructure is in place, it becomes increasingly expensive to implement space-to-move programs as path dependencies limit the range of viable options in the future.

In geomorphic terms, optimal prospects to reanimate strangled rivers are situations that have been subject to a lesser degree of anthropogenic constraint and have high recovery potential (see Box 1). Reaches that are subject to low-intensity land uses (e.g., no urban development, vineyards, or horticulture) and low-impact (older or patchy) stopbanks are considered high-priority situations for space-to-move interventions. In these instances, relatively small initiatives may engender significant and sustained improvement in river conditions. Trial applications of space-to-move interventions in carefully targeted locations offer the greatest prospect for successful interventions, building confidence in the effectiveness and benefits of low-cost passive restoration practices that leave the river alone as far as practicable (e.g., Fryirs et al., 2018; Kondolf, 2011; Poeppl et al., 2020).

Lower Hutt

Geomorphic setting
The Hutt River (655 km²) drains the southern Tararua Ranges (1376 m; located in Figure 1). The short, steep catchment is set within a tectonically controlled valley in which Late Pleistocene terraces, fault scarps, and valley margins exert considerable confinement. The river is less confined in its most distal (downstream) reach where it flows through Lower Hutt City, although the northern valley margin and Wellington Fault scarp confine the right bank. The Hutt River is a short, steep river that is cobbly for most of its length and is competent to convey gravel to within ~2 km of the coast.

Reach-scale anthropogenic constraints
The LiDAR-derived DEM indicates that historically the Lower Hutt River was a dynamically adjusting system with a significant range of habitat diversity. Pool-riffle sequences, point bar assemblages, and channel-floodplain connectivity would have been characteristic attributes of the sinuous 1905 channel, most likely a wandering gravel-bed river. Distinct cut-offs and palaeochannels would have supported wetland habitats in former times. The greatly suppressed range of river behavior indicated in the 1941 image is accentuated to an even greater degree in the 2016 image. Profound habitat loss accompanied imposition of a laterally constrained, low-sinuosity channel that now operates as a series of alternating bars, disconnected from its floodplain. The narrow channel corridor is marked by truncated lateral bars as the channel bounces from side to side between armored banks. Bends are unable to develop, let alone form cutoffs. Channel rationalization and simplification have reduced resistance and improved conveyance of sediment and discharge through the urbanized reach, amplifying stream power and the propensity for erosion. Rigorous maintenance of the riparian margin is required to ensure the security of flood protection infrastructure that protects urban development (Lower Hutt and Upper Hutt cities). Recurrent repairs to substantial stopbanks, rock lining, tied tree groynes, and willow plantings are needed to fix every minor breach.

Catchment-scale recovery potential
Pronounced path dependencies present little choice but to leave the river in its current location, pinned between urban development and the valley margin. However, future developments on adjacent lands are considered unwise, as disasters are inevitable in the future.
In biophysical terms, efforts to improve river conditions are now limited to minor habitat enhancement. A disregard for the ancestral whakapapa connection has resulted in an overwhelming sense of loss, often expressed in terms of degraded mauri (cf., Hikuroa et al., 2018). However, interventions that reconnect urban areas to their rivers as parkland corridors can enhance aesthetic and sociocultural relationships. Prospectively, the aim of kaitiakitanga (guardianship) will restore mauri, revitalizing ancestral connections, enhancing amenity values, and the vibrancy of a living river system. RiverLink (https://www.riverlink.co.nz/) is a partnership between Hutt City Council, Greater Wellington Regional Council, and Waka Kotahi NZ Transport Agency working together with Ng ati Toa Rangatira and Taranaki Wh anui ki te Upoko o te Ika to deliver flood protection, revitalize urban areas as a river city and enhance community connectivity via cycleways and pathways. Some managed retreats may be feasible to improve flood capacity and reduce risk of catastrophic infrastructure failure. This section of river is specifically included as a case study in the National Adaptation Plan (Ministry for the Environment, 2022, p. 130). BOX 3 Moderate (often highly contested) prospects for space-to-move interventions Instances between examples shown in Boxes 1 and 2 have notable potential for rehabilitation, but question marks remain over either prospects or approach. Typically, existing land uses and infrastructure, and associated land ownership issues inhibit the availability of space to support managed retreat initiatives. Otaki River, Greater Wellington Geomorphic setting The short, steep catchment of the Otaki River (348 km 2 ) drains the western Tararua Ranges (1529 m) prior to the river crossing a narrow coastal plain (reach is located in Figure 1). The channel is largely naturally unconfined in its most distal (downstream) reach, but upper reaches of the coastal plain where the river exits the range-front are laterally confined by Late Pleistocene river terraces . Within the ranges, the river is bedrock confined, comprising gorges, and pockets of alluvium. The Otaki River is competent to convey gravel to the coast where a mixed sand-gravel bar forms across the river mouth. Reach-scale anthropogenic constraints Historically, the Otaki River had a high dynamic range of variability. Multiple channel anabranches and dynamic, migrating bends created a rich assemblage of geomorphic units, including medial, lateral and point bars, islands, and oxbows (see . Channel rationalization and simplification have reduced resistance and improved conveyance of sediment and discharge to the coast, enhancing flood protection for Otaki township and development of areas previously occupied by the active channel bed and floodplain. These economic benefits have come at significant social-ecological cost. Loss of river diversity and geo-ecological functionality has been accompanied by environmental loss (solastalgia) that has impacted iwi connections to their river Moore, 2004). Maintenance of the riparian margin is required to ensure security of flood protection infrastructure. This includes substantial stopbanks, rock lining, tied tree groynes, and willow-planting. Catchment-scale recovery potential Although the downstream portion of the Otaki River has been highly modified, it has significant recovery potential. 
An abundant supply of bedload-caliber material from the ranges, coupled with frequent flows competent to mobilize this material, provides a good prospect for rapid reoccupation of the Otaki's former channel courses. The river has a high propensity for lateral change and reworking of its bed and adjacent gravelly floodplain. Although maintenance and enhancement of north bank stopbanks is required to protect Otaki township, removal of the south bank infrastructure would allow substantial recovery of the river and sufficient room to accommodate an enhanced range of flows and channel mobility. This strategy would improve floodplain connection, enhancing river habitat in wetlands and abandoned channels. Reanimating the Otaki as a living river would enhance its ora and further improve its mauri (life force) and mana (authority). Successful first steps could encourage and promote more ambitious initiatives in the future. A progressive sequence of actions can be envisaged, starting with initiatives to enhance riparian functionality (the geoecohydrological template of the river). Building on this, local allocation of additional space-to-move along the river corridor could enhance lateral (re)connection, reduce hydraulic efficiency (longitudinal connectivity) and ensure that vertical connectivity is not significantly altered (unless it is desirable to do so in a given instance; e.g., Wöhling et al., 2020). Thankfully, design criteria are increasingly in-hand to support such rewilding ventures (e.g., Ciotti et al., 2021;Wheaton et al., 2019). Appropriate management practices are fit-for-purpose, working with the river both individually and collectively (Brierley & Fryirs, 2022). Carefully targeted interventions for larger braided rivers, with their characteristic heterogeneity and biodiversity, are likely to have a high payoff for all the measures of impact/success indicated in Table 1. Alongside this, multiple measures that address issues at source, typically in riparian areas of small upland streams, enhance prospects to achieve a high payoff in downstream areas. Barriers to uptake of space-to-move interventions are especially pronounced in highly impacted areas, such as modified streams and rivers in towns and cities. Legacy effects set by past anthropogenic disturbance determine the degree of strangulation and the extent to which path dependencies are set-in-stone (or rip-rap/rock armor, or pipes!). It is extremely difficult to envisage a return to former circumstances in instances where significant land use development now occupies that space that has been put-aside. In these low-priority situations in terms of prospective uptake of high-cost space-to-move interventions, the best must be made of prevailing conditions (Box 2). Ongoing expenditure to increase flood protection in line with the changing climate is easily justified in these instances. For in-between, mid-priority instances, space-to-move interventions may have significant prospect to facilitate environmental repair, but these sites are likely to be the most contested sites in terms of land use relationships (Box 3). What some may see as options to reserve or retreat may be seen by others as land that is available for development. Proactive approaches to river management recognize the importance of diminishing returns in step-by-step approaches to the prioritization, design, and implementation of space-to-move interventions. 
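A minimal sketch of how such step-by-step prioritization might be screened is given below. The criteria, weights, and thresholds are hypothetical illustrations of the logic in Table 2 and Figure 2 (recovery potential weighed against degree of constraint and land-use pressure); they are not a scoring scheme proposed in this article.

```python
from dataclasses import dataclass

@dataclass
class Reach:
    name: str
    recovery_potential: float   # 0 (negligible) .. 1 (high), assessed/expert-elicited
    constraint_severity: float  # 0 (unconstrained) .. 1 (heavily engineered)
    landuse_pressure: float     # 0 (low-value rural) .. 1 (urban/intensive)

def priority_class(reach: Reach) -> str:
    """Crude red/yellow/green screening, mirroring the zones sketched in Figure 2."""
    score = (reach.recovery_potential
             - 0.5 * reach.constraint_severity
             - 0.5 * reach.landuse_pressure)
    if score >= 0.4:
        return "green: strong candidate for space-to-move"
    if score >= 0.0:
        return "yellow: staged, negotiated interventions"
    return "red: protect assets; local habitat enhancement only"

# Illustrative (assumed) scores loosely echoing Boxes 1-3.
reaches = [
    Reach("Rangitata (South Branch)", 0.9, 0.4, 0.5),
    Reach("Otaki (lower)",            0.7, 0.6, 0.4),
    Reach("Hutt (Lower Hutt City)",   0.3, 0.9, 0.9),
]
for r in reaches:
    print(f"{r.name}: {priority_class(r)}")
```

In practice, any such screening would be co-developed with mana whenua and local communities, and scores would be one input among many rather than a decision rule.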
Non-linear relations may become evident in negotiating compromise solutions between infrastructure protection and allowing a river space to move. Local allocations of small additional spaces for a river to adjust (say, a fraction of the active channel width) may achieve limited outcomes in terms of environmental benefits and hazard reduction relative to systematically allowing for half to full additional active width (or more) over a substantive length of river. Smaller, incremental additions may also engender limited socio-cultural benefits, continuing to perceive the river "over there, to be managed by someone else". F I G U R E 2 Schematic representation of risks and consequences associated with the uptake of space-to-move interventions in differing situations. Contextual circumstances determine opportunities for different forms of space-to-move intervention, reflecting what is achievable on the one hand, and associated consequences and risks on the other (Boxes 1-3). Highly constrained situations, typically associated with concerns for asset and infrastructure protection, have limited scope for uptake of space-to-move practices, although riparian zone enhancement and local improvement to the availability and viability of physical habitat may be possible (shown in red). The yellow zone refers to situations where a staged, incremental approach enhances space/capacity for adjustment and longitudinal/lateral reconnection of biophysical processes (and habitat linkages). These instances are often subject to contested land use pressures. The green zone represents situations with considerable prospect for uptake of space-to-move interventions, facilitating morphodynamic adjustment with low risk and low consequence in economic terms, but significant benefit in biophysical and socio-cultural terms (i.e., re-instigation of the mauri, mana, and ora of a living river). Pushing issues aside will make it even harder to address key issues in the future, perpetuating further reactive responses as part of disaster management practices. Further work is required to systematically appraise prospects for uptake of space-to-move interventions in light of contrasting circumstances exemplified in Boxes 1, 2, and 3. Management of risk varies markedly for these different situations (Figure 2). Risks and severity of consequences increase in highly developed situations. Accordingly, maintenance of the status quo and perhaps even accentuation of existing practices (e.g., enlarged and reinforced stopbanks) may be required to cope with future challenges. In less developed situations, space-tomove interventions may alleviate and mitigate various risks, but they cannot be eliminated. Conversely, without sufficient space-to-move, progressive deterioration in river conditions is likely. In intermediary situations, an engaged, informed, deliberative approach to negotiation of trade-offs is recommended (see next section). Prospects for uptake of space-to-move interventions are inherently fashioned by the policy context in which they are designed and applied. | PROSPECTS TO REANIMATE THE STRANGLED RIVERS OF AOTEAROA: SITUATING SCIENTIFIC INTERVENTIONS IN THEIR POLICY CONTEXT So as the image of tomorrow becomes clearer and more certain, a purely reactive approach to climate impacts becomes ever less credible. Instead, we need to plan and we need to prepare. For too long we have pushed climate adaptation to the back of the cupboard. Now is the time for a real step-change in our approach. 
Because the sooner we start, the more effective our efforts will be.-Hon James Shaw, Message from the Minister of Climate Change, August 2022, Urutau, ka taurikura: Kia t u pakari a Aotearoa i ng a huringa ahuarangi. Adapt and thrive: Building a climate-resilient New Zealand. Aotearoa New Zealand's First National Adaptation Plan, p. 6. Sooner or later the conversation (about managed retreat) will have to happen.-Jamie Cleine, Buller District Council Mayor, commenting on the recommendation that an extensive system of stopbanks and floodwalls be built to ringfence Westport on the South Island following devastating impacts of flooding in mid-July, 2021 (cited in Donaldson, 2021, p. 23). As highlighted by Castree (2019), the public authority of scientists lies in their ability to generate answers to cognitive questions, not normative ones, while politicians, businesses, and citizens select solutions to address what are perceived to be environmental problems (see Bluwstein et al., 2021;Lahsen & Turnhout, 2021). While river management is largely a response to political expedience and societal acceptability, science does play a fundamental role in helping to set the parameters and forecasting the impacts of socially acceptable actions in a given river system, as well as planning cost-effective ways to go about it. Researchers are not passive agents in this process. Recognizing that uptake of science is enhanced when it can point to publicly acceptable and feasible solutions, there is cause for optimism in considering prospects to address concerns for the strangled rivers of Aotearoa New Zealand (Williams, 2022). Recurrent flood disasters and concerns for the health of freshwater systems keep issues high on the public radar (Gluckman et al., 2017; State of Environment reports; Joy & Canning, 2020;Richards et al., 2021). The need for enhanced adaptive capacity to cope with extreme events in the face of climate and land use change already has immediacy and high salience, resonating across society (Mitchell & Williams, 2021). Unfortunately, the perceived flood protection provided to rate payers by stopbanks and associated infrastructure may not reflect reality, and risk will increase with climate change. Furthermore, the funding of flood defense through targeted rates and the focus this puts on purely economic cost-benefit considerations needs further analysis as a cause of the problem, highlighting the inability of such measures to fix it. Ultimately, the cost of repair far outweighs the costs of preventative approaches. Recognizing the imperative for economically viable options, from a landowner perspective much depends upon what the initial cost would be and whether such expenditure is viable at the time. In many instances, opportunities to avoid the cost and pain of reactive measures are still in-hand: Allow some productive land to rewild and rethink transport planning, even if it means selected road and rail routes will no longer take the most efficient (and cheapest) path. Costs for uptake of space-to-move interventions, or profits foregone as the case may be, must be weighed against the future losses of land taken from the river and the assets built upon it. Further case studies are required to quantify implications of managed retreat, appraising benefit-cost analyses of allowing a river to self-adjust relative to alternative interventions, clearly articulating prospects to reduce impacts of future disasters (cf., Buffin-Bélanger et al., 2015;Hanna et al., 2020Hanna et al., , 2021. 
A strong collective will to do something about these problems recognizes that unless the current trajectory is changed, the situation is only going to get worse into the future. This awareness enhances prospects to socialize the problem before contemplating solutions, sharing perspectives to clearly articulate the full range of perspectives on the issue. Facilitation of deliberative fora and workshops to inform planning options will further enhance public recognition and ownership of the problem, identifying what needs to change and how practitioners and publics might work together to tackle it. Identifying who has the mandate and who needs to be involved in the process of change through institutional and stakeholder mapping and engagement are critical starting points for such endeavors. A coalition of aligned practitioners and engaged citizens who want to do something about it could identify tangible ways to collectively drive change through formal partnerships to identify points of intervention with policy-and decision-makers. In such endeavors, it pays to make it as easy as possible to take action, maximizing the political incentive to do so. To this end, a wellplanned deliberative process is required to support collaborative problem structuring, joint fact-finding, and iterating solutions, building public understanding to motivate action. For the science community, this means using clear messaging to make the issue as readily comprehensible as possible, making every effort to reduce inherent complexities such that solutions to address problems are practicable and tractable. Uptake of managed retreat and space-to-move interventions entails much more than scientific rationales and benefit-cost analyses. Ultimately, it is a question of political will. Transformative change in the face of prevailing path dependencies is highly contingent and requires social catalysis (Acevedo Guerrero, 2018). While moving out of harm's way saves lives and ultimately, money, it remains a politically unpalatable solution (cf., Goodell, 2018). This is despite scientifically robust concerns for proactive and precautionary plans that emphasize the imperative for adaptation before the crisis hits. Pre-emptive interventions require enabling statutes. At present, the statutory system in Aotearoa New Zealand fails to readily enable space-to-move initiatives. A forthcoming overhaul of the Resource Management Act and its replacement with new legislation that includes a Climate Change Adaptation Act creates opportunities to incorporate space-to-move interventions and strategic (managed) retreat from land at risk of catastrophic and outbreak flooding within the planning framework (Donaldson, 2021, p. 22). Although these policy advances are encouraging, as yet no clear guidance has been provided to indicate what role compensations will play and how any work/compensations will be paid. Inevitably, the insurance industry will reposition itself too, as flood risk maps are revised and reappraised. A new generation of policy tools and legal instruments such as conservation covenants will be required to expropriate lands in the built environment, whether by consensus or decree. It is likely that escalating costs driven by climate change will place increasing strain on targeted rate models in the future. Ultimately, these costs can only be met by central (or a wider form of regional) government, rather than local authorities. 
Potentially, concern for management of river hazards could be incorporated within a well-resourced (rates-based) body such as the Earthquake Commission. In alignment with the recently announced national adaptation plan (Ministry for the Environment, 2022), we contend that this article provides initial guidance into some of the considerations to be addressed in taking steps to prioritize rivers and types of response in assessing prospects to design and implement space-to-move interventions to reanimate the strangled rivers of Aotearoa. In stark contrast to the emphasis upon fixity of a colonizing settler society (Goodin, 2012), space-to-move interventions promote adaptivity, envisaging rivers as living, adjusting, and emergent entities. In situations where managed retreat is the only viable long-term option, how can a flexible and dynamic adaptive pathways approach to negotiate the planned withdrawal of communities and infrastructure away from threatened areas be developed and applied (Donaldson, 2021; see Ministry for the Environment, 2022, p. 143)? A policy logic that frames the problem and possible solutions in a different way requires public support to facilitate a transformation in river management practices. Moving beyond undue reliance upon technological solutions to fix rivers, regenerative approaches emphasize concerns for how society lives with (as a part of) living rivers, rather than the increasingly futile quest to control them . Collaborative co-design processes that work with community stakeholders from across the social, political, and economic spectrum offer the greatest prospect for transformative change. Prospectively, the principles outlined in Table 2 and the examples shown in Boxes 1-3 could support initial steps in an incremental, staged approach to interventions that evaluate and perhaps classify priorities in a given catchment and region. More broadly, however, how can leadership be envisaged and enacted as a distributed, collaborative process that incorporates collective engagement and ownership through catchment guardianship? Conducted effectively, river rehabilitation is a socially and culturally regenerative process that enhances collective wellbeing (ora). Outwardly, constructive alignment of scientific principles with Te Mana o te Wai Hikuroa et al., 2021;Te Aho, 2019) indicates a genuine prospect to design and implement space-to-move interventions that will reanimate the strangled rivers of Aotearoa New Zealand. Such programs would express and realize a different interpretation of the public values of river systems. Prospects to steer a different and potentially difficult course, while maintaining social consensus, require the establishment of a common or shared understanding of the problem from a wide range of perspectives (Harmsworth et al., 2016;Parsons et al., 2021). Development and use of Living Databases is required to facilitate, promote and use such understandings to support transformed approaches to the management of living rivers. Integrating plural perspectives in programs that work with nature and embrace ancestral connections are policy imperatives in enhanced approaches to sustainability and biodiversity management across the world (e.g., Díaz et al., 2015;Hill et al., 2020;cf., Smith et al., 2016). Ultimately, it pays to listen and learn from the river itself in effort to live generatively with it . 
Study of the Etiological Causes of Toe Web Space Lesions in Cairo, Egypt Background. The etiology of foot intertrigo is varied. Several pathogens and skin conditions might play a role in toe web space lesions. Objective. To identify the possible etiological causes of toe web space lesions. Methods. 100 Egyptian patients were enrolled in this study (72 females and 28 males). Their ages ranged from 18 to 79 years. For every patient, detailed history taking, general and skin examinations, and investigations including Wood's light examination, skin scraping for potassium hydroxide test, skin swabs for bacterial isolation, and skin biopsy were all done. Results. Among the 100 patients, positive Wood's light fluorescence was observed in 24 and positive bacterial growth was observed in 85. With skin biopsy, 52 patients showed features characteristic of eczema, 25 showed features characteristic of fungus, 19 showed features characteristic of callosity, and 3 showed features characteristic of wart, while in only 1 patient the features were characteristic of lichen planus. Conclusion. Toe web space lesions are caused by different etiological factors. The most common was interdigital eczema (52%) followed by fungal infection (25%). We suggest that patients who do not respond to antifungals should be reexamined for another primary or secondary dermatologic condition that may resemble interdigital fungal infection. Introduction Toe web intertrigo may present as a relatively asymptomatic, mild scaling, but it can also be seen as a painful, exudative, macerated, erosive, inflammatory process which is sometimes malodorous [1]. Foot intertrigo is mostly caused by dermatophytes and yeasts and less frequently by Gram-negative and Gram-positive bacteria. Gram-negative infection is relatively common and may represent a secondary infection of tinea pedis. With time, a "complex" may develop in the setting of moisture and maceration that contains multiple fungal and bacterial organisms [2,3]. These interdigital lesions are frequently diagnosed as tinea pedis or eczematous dermatitis. However, in some patients, the macerated eruption is unresponsive to treatment with antifungal agents or anti-inflammatory agents such as topical steroids [4]. Other less common conditions may also affect the web space, such as erythrasma [5]. Because the texture of a soft corn is macerated, it may be misdiagnosed clinically as a mycotic infection of the interdigital space [6]. Interdigital psoriasis ("white psoriasis" or "psoriasis alba") is a distinct but atypical form of psoriasis that is often missed as it is commonly mistaken for interdigital fungal infection [7]. Bowen's disease [8] and a case of verrucous carcinoma presenting as intractable intertrigo in the third and fourth toe web space [9] have also been reported, and malignant melanoma in the interdigital space has been diagnosed [10]. It is apparent that several pathogens and factors might play a role in toe web space lesions. Therefore, clinical and microbiologic studies are suggested to assist in the selection of appropriate treatment and the prevention of important complications [3]. In this study, we aimed to determine the different etiological causes of pedal web space lesions. Patients and Methods This study included 100 Egyptian patients living in Cairo who presented at the Dermatology Departments of Al-Azhar University Hospitals with toe web space lesions.
The study was approved by the Al-Azhar University Medical Ethics Committee, and written prior informed consent was obtained from every participant. Patients who had received systemic and/or topical antifungal and/or antibiotic treatments in the previous 6 weeks were excluded. Every patient was subjected to the following: (1) full history taking; (2) general and local examinations; (3) Wood's light examination for possible fluorescence; (4) skin scraping for direct potassium hydroxide (KOH) testing: scraping of lesions was done with a blunt scalpel, then a drop of 20% KOH/40% dimethyl sulfoxide (DMSO) mixture was added to the specimen (DMSO increases the sensitivity of the preparation and softens keratin more quickly than KOH alone in the absence of heat [11]); the slide was then examined for fungus using a low power field ×10 then a high power field ×40; (5) skin swabs for bacterial isolation: cotton swabs were taken from the lesions and incubated for 24 hours at 37°C on blood, MacConkey (BIOTEC Lab. Ltd., UK), and nutrient (Oxoid Ltd., England) agars; the bacterial cultures were considered negative if there was no growth after 48 hours of incubation; suspected colonies were picked for identification by morphological and biochemical reactions; (6) a 3 mm punch biopsy was taken from the lesion. Specimens were fixed in 10% formalin then sectioned and stained with haematoxylin and eosin stain (as a routine) and PAS stain (to highlight fungal elements). All involved toe web spaces were examined with Wood's light, while the KOH test, skin swabs for bacterial isolation, and skin biopsy were done from the most affected web space showing intensive erythema and desquamation (mostly the 4th web space). Wood's light examination aimed to show the characteristic coral red fluorescence of erythrasma [12]. The goal of the mycological workup in this study was to demonstrate only the presence of fungi (regardless of their nature) as an etiologic cause of web space lesions; we therefore considered the KOH test (supported by PAS-stained tissue sections) sufficient for this aim. The recognition of fungal organisms as dermatophyte, mould, or yeast by KOH is presumptive, although it is highly probable [13]. We relied upon the WHO guidelines on standard operating procedures for microbiology [14]. These guidelines document that dermatophytes have regular, small hyphae (2-3 μm) with some branching, sometimes with rectangular arthrospores. Candida has hyphae/pseudohyphae (with distinct points of constriction) with budding yeast forms. Aspergillus species have hyphae that are usually small (3-6 μm) and regular in size, dichotomously branching at 45-degree angles with distinct cross septa. The histopathological examination aimed to correlate with and confirm the clinical diagnosis. A diagnosis of lichen planus was established when the biopsy showed the following criteria: a dense, band-like infiltrate of lymphocytes that is strictly confined to the subepidermal area. The lymphocytes attack and destroy the basal part of the epidermis, giving rise to characteristic sawtooth-like rete ridges. The cell infiltrate in the dermis consists of lymphocytes with a mixture of some mast cells and macrophages; there is also a variable amount of melanin pigment which has leaked from the injured epidermis. There are no plasma cells or eosinophils [15]. In biopsies stained with PAS, fungal elements are highlighted, with hyphae and spores stained red. Dermatophytes produce septate hyphae and arthrospores [15].
Biopsy from patients with eczema demonstrates that epidermis shows moderate to marked acanthosis and hyper/ parakeratosis. There may be areas of inter-and intracellular edema and rarely scattered small vesicles. The inflammatory cell infiltrates mainly consist of lymphocytes. Edema in the dermis is not prominent. Sometimes the pattern is psoriasislike and shows long, slender rete ridges and papillae, which are covered by thin epidermis and contain thin-walled dilated venules filled with erythrocytes [15]. Acanthosis, papillomatosis, and hyperkeratosis are observed in biopsies from patients with warts, with confluence of the epidermal ridges in the centre of the lesion and koilocytes [16]. A thickened compact stratum corneum with slight cupshaped depression of the underlying epidermis is seen in patients with callosity. The granular layer, in contrast to corn, may be thickened. There may be some parakeratosis overlying the dermal papillae, but much less than in a corn [17]. Demographic and Clinical Findings. Of 100 patients with toe web space lesions, 72 were females and 28 were males. Their ages ranged from 18 to 79 years (mean 43.94 years; SD ±14.94). Regarding the presenting symptoms, 64 patients complained of pruritus while only 2 complained of pain and 34 were asymptomatic. The 4th web space was the commonest space affected in both right and left feet, followed by 3rd space, 2nd space, and 1st space (79%, 62%, 33%, and 7% versus 76%, 63%, 30%, and 8%, resp.) ( Table 1). Of the 100 patients, 8 patients had lesion in 1 web space, 21 had lesion in 2 web spaces, 26 had lesion in 3 web spaces, 17 had lesion in 4 web spaces, 6 had lesion in 5 web spaces, 18 had lesion in 6 web spaces, 1 had lesion in 7 toe web spaces, and 3 had lesion in 8 web spaces. Wood's Light Examination. Among the 100 patients, positive Wood's light fluorescence was observed in 24; of them, 20 were females and 4 were males. All revealed coral red fluorescence characteristic for erythrasma. Microbiological Findings. Positive KOH results were noticed in 66 patients; 50 were females and 16 were males. 57 KOH mounts were highly typical of yeast species (yeast Table 3). Most of the patients showed more than one etiological factor for the intertrigo. 50 cases presented with 2 diseases, 38 cases presented with 3 diseases, 9 cases presented with 4 diseases, and only 3 patients showed a single disease. Positive Wood's light fluorescence (erythrasma) was associated with 14 KOH-positive cases, 15 cases with Gram-positive cocci, 6 cases with Gram-negative bacilli, 12 cases with biopsy proven eczema, 8 cases with biopsy proven fungus, and 3 cases with biopsy proven callosity. Gram-positive cocci were associated with 36 KOH-positive cases, 15 positive Wood's light fluorescence (erythrasma) cases, 10 biopsy proven fungi, 26 eczema cases, 6 callosities, and 1 wart. Gram-negative bacilli were associated with 22 KOH-positive cases, 6 positive Wood's light fluorescence (erythrasma) cases, 11 biopsy proven fungi, 20 eczema cases, 7 callosities, and 1 lichen planus. Discussion Foot intertrigo may present as a chronic erythematous desquamative eruption. This is often diagnosed as tinea pedis or eczematous dermatitis. However, in some patients, the macerated eruption is unresponsive to treatment with antifungals or anti-inflammatory agents [4]. Therefore, clinical and microbiological studies are suggested to assist in the selection of appropriate treatment and the prevention of important complications [3]. 
This study was planned to verify the etiological causes of toe web space lesions in randomly [18,19]. In our study, 72 (72%) were females; the majority of them were housewives. Household work including kitchen work, duties for cleaning, washing, caring for children and other domestic activities, and shopping may explain the increased incidence of microbial intertrigo especially tinea pedis in this sector of population. On the contrary, Ahmad et al. [19] in Pakistan reported higher rate in males (56.7%). They attributed this to wearing closed shoes most of the time in hot and humid climate. In our patients, the 4th (lateral) toe web space was the most commonly affected in both right (79%) and left (76%) foot. This is in agreement with several reports [18][19][20]. This could be related to anatomical considerations (potentially occluded space). Many authors refer to web space infection as tinea pedis or "foot ringworm" and some consider it to be purely dermatophytes induced [21][22][23]. However, other studies have demonstrated that recovery of dermatophytes from macerated webs is low. Generally, it ranges from 7.5% to 61% [1,[24][25][26]. The lower incidence of dermatophyte infection in these studies may be explained by the fact that, in mixed intertrigo, bacterial production of methanethiol and other sulfur compounds can lead to inhibition of dermatophytes [1]. Depending on KOH mount, this study showed that tinea pedis was observed in 66 (66%) patients; 50 of them were females (75.76%) and 16 were males (24.24%). This agrees with Morales-Trujillo et al. [5] who declared that fungi were positive in 62.5% of 70 cases. Using KOH, Ahmad et al. [19] out of 118 cases reported higher rate of positivity where 90% had positive direct microscopy while only 50.8% had positive cultures. On the other hand, Pau et al. [27] in a study on 1568 patients reported lower rate of tinea pedis infection (14.79%). This disagreement may be due to difference in life styles (e.g., type of shoes worn), weather conditions such as humidity, and the varying number of cases in each study. Cases with erythrasma can be diagnosed by positive Wood's lamp examination and/or Gram staining/culture [28]. In the present study, positive coral red fluorescence with Wood's light, which is characteristic for erythrasma, was found in 24% of the patients. The prevalence of erythrasma varies greatly from one study to another. Sariguzel et al. [29] reported a prevalence of 19.6% in 121 patients with interdigital foot lesions. Allen et al. [30] reported that, in 300 patients, 18.7% were determined to have erythrasma. Morales-Trujillo et al. [5] examined 73 patients, of whom 24 (32.8%) were diagnosed with erythrasma. Inci et al. [28] concluded that the rate of erythrasma was found to be 46.7% among 122 patients with interdigital foot lesions. Svejgaard et al. [31] reported a prevalence of 51.3%, (prior to military service) and 77.1% (reexamined at the end of military service) in a group of Danish military recruits. This discrepancy may be attributed to the type of population studied, environmental conditions such as heat and humidity that increased the risk for developing erythrasma, and the methods used in diagnosis, for example, Wood's light examination, culture, and/or direct microscopy with Gram staining. In Polat andİlhan [32] study, using only Wood's lamp examination or Gram's staining resulted in 31 (42.5%) or 14 (19.2%) positive patients, respectively. 
Using Wood's lamp examination and Gram's staining concurrently resulted in 28 positive patients (38.4%). Interestingly, Inci et al. [28] reported no growth of C. minutissimum in bacteriological cultures from all patients with interdigital lesions. However, they found that using only Wood's lamp examination or Gram staining resulted in 11 (9%) and 19 (15.6%) positive patients, respectively, whereas using both Wood's lamp examination and Gram staining concurrently resulted in 27 positive patients (22.1%). This suggests that bacteriological cultures have no or only a limited role in diagnosing erythrasma. In this study, interdigital erythrasma was demonstrated more in females (83.3%) than males (16.7%). This finding agrees with Morales-Trujillo et al. [5], who stated that interdigital erythrasma was more common in women than men (83.3% versus 16.7%). Also, Polat and İlhan [32] stated that most of their patients with erythrasma were women (56.2%). The exact cause of this female predominance is unknown but may be attributed to occupational factors including household duties and exposure to more heat and humidity. The interdigital space is typically colonized by polymicrobial flora. Dermatophytes may damage the stratum corneum and produce substances with antibiotic properties. Gram-negative bacteria may resist these antibiotic-like substances and proliferate. This process may progress to Gram-negative foot intertrigo [4]. In this study, no combined presence of Gram-positive cocci and Gram-negative bacilli was detected in any case. Although the absence of this association is unexpected, it could not be fully explained. Gram-positive cocci were associated with 36 KOH-positive cases and 10 biopsy proven fungal cases. Gram-negative bacilli were associated with 22 KOH-positive cases and 11 biopsy proven fungal cases. Gram-negative bacterial infection may represent a secondary invasion of web space lesions. With time, in the presence of local humidity and maceration, other Gram-positive bacterial (and fungal) organisms may proliferate. Thus, web space infection may have a single or polymicrobial etiology. Twenty-eight females suffered from Gram-negative bacterial web space lesions versus only 10 males, a female-to-male ratio of 2.8 : 1. This disagrees with Aste et al. [2], who reported a male-to-female ratio of 4 : 1 for Gram-negative interweb foot infection. Also, Lin et al. [4] reported that foot bacterial intertrigo was more common in men (82%). The higher prevalence in males may be related to the more frequent use of closed shoes for occupational and nonoccupational activities such as sports. Generally speaking, the routine diagnosis of interdigital lesions depends mainly upon history and clinical appearance, with or without direct microscopy of a KOH preparation, bacterial cultures, and/or Wood's light examination, while web space biopsy is not routinely used. Yet this study demonstrated the additional value of web space histopathology, with clinical correlation, in the definitive diagnosis of interdigital lesions. Despite the great clinical similarities, after clinicopathological correlation we diagnosed a large number of cases of interdigital eczema (52% of cases). These were clinically manifested as pruritic, macerated, glazed skin usually limited to the interdigital space.
PAS-positive cases (interdigital fungal intertrigo) which constituted 25% of cases were clinically presented as pruritic, wet, and macerated interdigital space which usually extended to the plantar and/or dorsal surface of the foot. Some cases were associated with fungal infection of other parts of the body. Cases of callosity of toe web space (19%) were characterized by a well-defined white plaque limited to the web space. Three cases of interdigital warts were diagnosed (3%): two cases were asymptomatic while the third case was complaining of painful lesion. All of the 3 cases were associated with planter warts. In this study, we found a healthy 49-year-old female clinically presented with pruritic interdigital lesions of 3 weeks' duration. Examination revealed bilateral and symmetrical whitish ill-defined plaques in the second and the third web spaces of both feet. Biopsy was characteristic for lichen planus. This is a very rare site for lichen planus. We think that it is important to recall such underestimated variant of lichen planus and other uncommon dermatoses in this site. Conclusion Many cases of web space lesions can be overdiagnosed, underdiagnosed, or misdiagnosed. These may be caused by different conditions including eczema, fungal intertrigo, erythrasma, callosity, wart, or even lichen planus. Although fungal foot infection is common, we suggest that patients who do not respond to topical and/or systemic antifungal therapy should be reexamined for another primary or secondary dermatologic condition that may resemble pedal fungal intertrigo. The diagnostic procedures in this work can be complementary to each other and can be used as an investigative workup tailored to individual patients who have resistant to treat or have atypical interdigital lesions.
TREE STEM AND CANOPY BIOMASS ESTIMATES FROM TERRESTRIAL LASER SCANNING DATA In this study an automatic method for estimating both the tree stem and the tree canopy biomass is presented. The point cloud tree extraction techniques operate on TLS data and models the biomass using the estimated stem and canopy volume as independent variables. The regression model fit error is of the order of less than 5 kg, which gives a relative model error of about 5 % for the stem estimate and 10–15 % for the spruce and pine canopy biomass estimates. The canopy biomass estimate was improved by separating the models by tree species which indicates that the method is allometry dependent and that the regression models need to be recomputed for different areas with different climate and different vegetation. INTRODUCTION In recent years there has been an increased interest in using terrestrial laser scanning (TLS) as a tool in forest inventory (Liang et al., 2016).The emphasis has been to estimate forest variables such as stem diameter at breast height and tree height (Olofsson et al., 2014) as well as modelling the stem profile (Thies et al., 2004, Henning and Radtke, 2006, Maas et al., 2008, Liang et al., 2014, Mengesha et al., 2015, Olofsson and Holmgren, 2016) and branches (Raumonen et al., 2013). The assesment of above ground biomass of trees is essential when evaluating tree populations in forests (Olschofsky et al., 2016).Therefore there is a need to automate the measurements.There are a number of studies that have shown that the estimate of tree biomass can be improved using TLS compared to traditional biomass models (Yu et al., 2013, Kankare et al., 2013, Hauglin et al., 2013, Calders et al., 2015).The techniques could for instance be used to estimate biomass in densely stocked young tree plantations (Seidel et al., 2013) or when modelling the tree biomass change (Srinivasan et al., 2014). In this study an automatic method for estimating both the tree stem and the tree canopy biomass is presented.The point cloud tree extraction techniques operate on TLS data and models the biomass using the estimated stem and canopy volume as independent variables. Field Data The field trees were sampled from the Flakaträsk site in northern Sweden (64 • 16'13.53"N, 18 • 29'52.59"E), Table 1.Eight spruce and eight pine trees were cut down and the biomass of the stems, branches and needles were measured in the field (Goude, 2016).The biomass of the branches and needles were combined to a canopy biomass class. The terrestrial laser instrument used in the measurements was a Trimble TX8 with a field of view: 360 • × 317 • , beam divergence 0.34 mrad, 1 million laser points per second and wavelength 1.5 µm (Near-IR).A multiscan setup was used with three instrument positions surrounding each tree, Figure 1. Stem and Canopy Volume Extraction Algorithms The stems of the trees were detected and modelled using a TLS single tree extraction algorithm (Olofsson and Holmgren, 2016) which is based on the idea that stems are approximately smooth and shaped like cylinders.A voxel-based model was used where small patches of the stem surfaces were extracted using eigen decomposition of the laser point cloud (Olofsson and Holmgren, 2016). 
The stem surface patches were connected and the center of the stem was estimated by calculating the curvature of the stem surface. All points that belong to stem surface patches pointing to the same center were classified as stem points (Figures 2 and 3). All connected stem points were used to fit cylinders as models of stem segments (Olofsson and Holmgren, 2016). Stem cylinders positioned above each other were connected to stems. In this way a stem curve was extracted using cylinders with decreasing radius as a function of height.

Figure 2. A laser scanned point cloud of part of a tree. The data observations that were classified as stem points by the filter algorithm are coloured green.

Once the cylinder models of the stems were calculated, the volume of the stems was estimated from the modelled stem cylinders. The top part of the tree, where the single tree detection algorithm was unable to detect stem cylinders, was modelled as a cone reaching the highest registered laser point in the canopy.

The horizontal positions of the detected stems were used when extracting the canopies of the trees. All laser points within a search radius of 1.5 m from a stem were assigned to that tree. The closest stem was chosen if a point was within the search radius of several trees.

The point clouds of each tree were subdivided into three classes: stem, canopy and understory (Figure 4). The stem points were classified by the TLS single tree extraction algorithm (Olofsson and Holmgren, 2016). The understory points were classified as all non-stem points below 2 m. The remaining points were classified as canopy.

The tree crowns were modelled as circles surrounding the detected stems (Figure 4). The laser points classified as canopy were used in the estimate of the crown radius: the average laser point distance to the estimated stem line was used as an estimate of the canopy radius at each 0.5 m height interval. The canopy volume was estimated as the sum of the volumes of the conical frustums connecting two consecutive crown circles, Equation (1):

V_frustum = (pi * h / 3) * (r1^2 + r1*r2 + r2^2),      (1)

where V_frustum is the volume of a conical frustum, r1 and r2 are the radii of two consecutive circles and h is the height interval between the two circles.

Biomass Models

Models of the tree stem and canopy biomass were estimated using linear regression, Equation (2):

M_bio = k * V_tls + m,      (2)

where M_bio is the biomass of the stem or canopy (branches and needles), V_tls is the TLS estimated volume of the stem or canopy, k is the slope of the line, and m is the y-axis intercept of the line. The model fit of the regression lines was evaluated using the root mean square error (RMSE), Equation (3):

RMSE = sqrt( (1/n) * sum_{i=1..n} (x_i - y_i)^2 ),      (3)

where x_i is the modeled biomass of tree number i, y_i is the observed biomass of tree number i and n is the number of trees.

RESULTS AND VALIDATION

The field data and the stem and canopy biomass linear models are shown in Figure 5 and Figure 6. The estimate of the canopy biomass is improved if the models are separated by tree species (Figure 6, Table 2). This is probably due to the fact that spruce canopies are denser than pine canopies. This means that more accurate models for above ground biomass could be developed if tree species are detected automatically.
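The volume bookkeeping of the method can be condensed into a few lines of code. The following Python sketch is illustrative only: the function names and the toy radius/height lists are assumptions rather than part of the published processing chain, and the stem is reduced to a stack of fitted cylinders topped by a cone, as described above.

import math

def stem_volume(cylinder_radii, cylinder_heights, cone_height=0.0):
    """Stem volume as a stack of fitted cylinders plus an optional top cone."""
    v = sum(math.pi * r**2 * h for r, h in zip(cylinder_radii, cylinder_heights))
    if cone_height > 0 and cylinder_radii:
        # top of the tree modelled as a cone on the uppermost fitted cylinder
        v += math.pi * cylinder_radii[-1]**2 * cone_height / 3.0
    return v

def frustum_volume(r1, r2, h):
    """Volume of a conical frustum, Equation (1)."""
    return math.pi * h / 3.0 * (r1**2 + r1 * r2 + r2**2)

def canopy_volume(canopy_radii, dz=0.5):
    """Canopy volume as a sum of frustums connecting consecutive crown circles."""
    return sum(frustum_volume(r1, r2, dz)
               for r1, r2 in zip(canopy_radii[:-1], canopy_radii[1:]))

# toy example, radii and heights in metres
print(stem_volume([0.15, 0.13, 0.11], [2.0, 2.0, 2.0], cone_height=4.0))
print(canopy_volume([0.4, 0.9, 1.2, 1.1, 0.7, 0.2]))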
The regression model fit error is of the order of less than 5 kg, which gives a relative model error of about 5 % for the stem estimate and 10–15 % for the spruce and pine canopy biomass estimates (Table 2). These values are comparable to other studies: for instance, Yu et al. (2013) obtained an RMSE of 12.5 % for stem biomass, Hauglin et al. (2013) retrieved above ground biomass with 12.9 % and 11.9 % overall accuracy for Scots pine and Norway spruce respectively, and Calders et al. (2015) retrieved an RMSE of 9.7 % for 65 Eucalyptus leucoxylon, microcarpa and tricarpa trees using multi-scan TLS.

The fact that the canopy biomass estimate was improved by separating the models by tree species indicates that the method is allometry dependent and that the regression models need to be recomputed for different areas with different climate and different vegetation. This should, however, be possible to do for a number of regions in each country. The models for stem biomass seem to be less dependent on tree species.

Figure and table captions:
Figure 1. Example of a point cloud from a laser scanned forest plot. Shading occurs behind the trees and under the measuring instrument. The number of shaded areas can be reduced if several instrument positions are used.
Figure 3. A laser scanned point cloud of two trees standing close together. The data observations that were classified as non-stem points by the filter algorithm are coloured gray. The data points classified as stem points are used when cylinder fitting stem segments.
Figure 4. Terrestrial laser scanner data classified into tree stem (blue), understory (brown) and canopy (red). The radii of the light blue circles are the average distances from the stem of the canopy points at each height interval.
Figure 5. A linear model of stem biomass using TLS estimated volume as independent variable.
Table 1. Stem diameter at breast height and tree height for the field trees.
Table 2. Tree biomass model parameters and RMSE for the model fit, Equations (2, 3).
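A minimal sketch of the species-specific biomass regression and its error metrics (Equations 2 and 3) is given below. The numerical values are invented placeholders rather than the field data of Table 1, and numpy's polyfit simply stands in for whatever fitting routine was actually used.

import numpy as np

def fit_biomass_model(v_tls, m_bio):
    """Least-squares fit of M_bio = k * V_tls + m (Equation 2); returns (k, m)."""
    k, m = np.polyfit(v_tls, m_bio, deg=1)
    return k, m

def rmse(modeled, observed):
    """Root mean square error of the model fit (Equation 3)."""
    modeled, observed = np.asarray(modeled), np.asarray(observed)
    return np.sqrt(np.mean((modeled - observed) ** 2))

# invented example values (volumes in m^3, biomass in kg) to show the workflow
v_tls = np.array([0.10, 0.15, 0.22, 0.30, 0.41])
m_obs = np.array([55.0, 80.0, 118.0, 160.0, 215.0])
k, m = fit_biomass_model(v_tls, m_obs)
pred = k * v_tls + m
print(k, m, rmse(pred, m_obs), 100.0 * rmse(pred, m_obs) / m_obs.mean())  # relative error in %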
AEROSOL OPTICAL PROPERTIES RETRIEVED FROM THE FUTURE SPACE LIDAR MISSION ADM-AEOLUS The ADM-Aeolus mission, to be launched by end of 2017, will enable the retrieval of aerosol optical properties (extinction and backscatter coefficients essentially) for different atmospheric conditions. A newly developed feature finder (FF) algorithm enabling the detection of aerosol and cloud targets in the atmospheric scene has been implemented. Retrievals of aerosol properties at a better horizontal resolution based on the feature finder groups have shown an improvement mainly on the backscatter coefficient compared to the common 90 km product. INTRODUCTION The space-based Doppler mission AEOLUS of the European Space Agency expected to be launched towards end 2017 will be the first High-Spectral Resolution Lidar (HSRL) in space.Operating in the UV the HSRL capability is implemented by directing the backscattered light through 3 successive interferometers.The sensor behind the first one is dedicated to sensing particulate backscatter, its signal is called "Mie signal" and the sum of signals from the other two sensors is called "Rayleigh signal".These signals both contain contributions from molecular and particulate backscatter that are later seperated by computing a "cross talk correction".The system is primarily designed for the measurement of winds, but the HSRL capability enables the measurement of the particulate backscatter and extinction coefficients without any a priori assumption on the aerosol type.The level-2A (L2A) processor has been developed to retrieve vertical profiles of optical properties in 24 bins from the ground up to 30 km with a vertical resolution varying from 250 m to 2 km.The extinction coefficient is then derived through an iterative scheme while the backscatter coefficient is directly derived from the ratio of the pure particulate to molecular signals (more details can be found in [2]).The ability of the L2A to retrieve clouds and aerosols properties over a 90-km horizontal scale has already been demonstrated.A good retrieval of the overall shape of aerosol and cloud layers (see Fig. 1) with a better accuracy on the backscatter coefficient was shown in [1]. Figure 1: Vertical profile of L2A backscatter coefficients retrieval (red line) compared to the true expected profile (grey) and estimated retrieval errors (horizontal black lines). 90 km horizontal scale product In order to improve the horizontal resolution of the retrieval and provide an aerosol product at a finer scale than 90 km, a feature finder algorithm has been developed.It identifies homo- geneous aerosol and cloud targets on which the lidar signal is accumulated in order to obtain a sufficient signal to noise ratio (SNR). METHODOLOGY The feature finder developed for ADM-Aeolus is largely inspired by the EarthCare algorithm following the study of [3].The algorithm is based on the assumption that both clear-sky and cloudy or aerosol loaded Mie signals as well as background noise follow a Gaussian distribution.If we consider a given signal level as a lower threshold, the part of the clear sky signal distribution above the threshold (red area in Fig. 2) is a false-alarm probability, i.e. the probability that the signal level in a clear sky pixel actually exceeds the threshold.The part of the particle-loaded distribution that falls below the threshold (dark green area in Fig. 
2) is the probability of missing detection.The probability of detection on the Mie channel P Mie , i.e the bright green area, can be expressed by: with S the signal level in a particle contaminated bin, δ S the noise level, and er f c the complementary error function. The expression of the SNR is then used to simplify equation 1: into: The complementary er f c function being monotonic, applying a threshold on P Mie is the same as applying a threshold on the Mie SNR.From a database of simulated scenarios, the SNR value enabling a high detection score has been determined. Feature Finder performance An End-to-End simulator (E2S) developed by ESA to simulate the lidar signals (Rayleigh and Mie channels) received by ADM-Aeolus has been used to simulate simple to more complex atmospheric scenes: standard boundary layer aerosols, additional thin and opaque clouds, real scenes observed during the LITE (Lidar Inspace Technology Experiment) experiment.By comparing the aerosol/cloud mask computed by the feature finder to the "true" E2S mask (input particulate backscatter coefficient larger than 10-6 m −1 sr −1 ), the expected accuracy of the detection has been evaluated.Figure 3 shows the feature finder performance on two LITE scenarios: the top panel shows a complex case with thin high clouds and middle level opaque clouds attenuating most of the signal (LITE C)whereas the bottom panel shows a simple case with thin cirrus clouds on top of boundary layer aerosols (LITE A).Particle contaminated pixels well detected by the feature finder are shown in yellow, false alarm in light green, missing aerosol in blue and clearsky pixels in dark blue.White pixels represent pixels for which a cloud detection is not possible due to the low transmittance of the lidar signal through the upper clouds (two-way transmittance smaller than 0.1).Taking into account the whole scene, a good detection of 72 % with a false alarm rate of 4 % was obtained for LITE C. The good detection rate is increased up to 83 % if we only consider the atmosphere above the boundary layer in which the lidar signal is probably too noisy to reliably retrieve optical properties at a small scale.For LITE A, a good detection of 45 % is shown for the whole scene but this rate is increased to 86 % when considering the upper part of the atmosphere (above 3 km) corresponding to the cirrus clouds.The developed FF algorithm is thus able to detect high level aerosol and clouds with a very high degree of good detection.However, the detection of boundary layer aerosol is still challenging due to the low noise level. 
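The equivalence between thresholding P_Mie and thresholding the Mie SNR translates directly into a detection mask. The sketch below is a simplified stand-in for the feature finder: the erfc form of the detection probability follows the standard CALIOP-style formulation and is an assumption here, the SNR threshold value is a placeholder rather than the tuned value derived from the simulation database, and no grouping of detected bins into features is attempted.

import numpy as np
from scipy.special import erfc

def detection_probability(signal, noise, threshold):
    """Schematic probability that a particle-loaded bin exceeds the clear-sky threshold."""
    return 0.5 * erfc((threshold - signal) / (np.sqrt(2.0) * noise))

def feature_mask(mie_signal, mie_noise, snr_threshold=3.0):
    """Flag bins whose Mie SNR exceeds a threshold (assumed value here)."""
    snr = np.divide(mie_signal, mie_noise,
                    out=np.zeros_like(mie_signal), where=mie_noise > 0)
    return snr > snr_threshold

# toy scene: 24 vertical bins x 10 profiles of simulated Mie signal and noise
rng = np.random.default_rng(0)
signal = rng.gamma(2.0, 1.0, size=(24, 10))
noise = np.full_like(signal, 0.8)
mask = feature_mask(signal, noise)
print(mask.sum(), "bins flagged as cloud/aerosol features")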
Group retrievals

In order to evaluate the capability of the L2A processor to retrieve aerosol properties at a finer horizontal scale than the 90 km accumulation length, aerosol retrievals have been performed after averaging lidar signals on the small groups identified by the feature finder. These retrievals are then compared to the true E2S properties averaged on the same horizontal features. In order to emphasize the improvement in aerosol properties brought by an increased horizontal resolution, the retrievals at a 90 km horizontal length scale are also compared to the E2S properties averaged at the same length as used for the feature finder retrievals. We then compute a ratio comparing the two sets of retrievals (Figure 4), where aer_FF represents the aerosol retrieval (extinction or backscatter) obtained by accumulating the lidar signal on the features detected by the algorithm, aer_E2S,FF represents the aerosol properties used in the E2S after accumulation on the features detected by the algorithm, and aer_90km the aerosol retrievals performed by accumulating the lidar signal over 90 km.

Figure 4 shows this ratio for the complex LITE C scenario. Values above zero represent an improvement of the retrievals when using a smaller horizontal length, values around zero indicate similar performance, and values below zero indicate that the 90 km product leads to more consistent results with the E2S inputs. For both the extinction and the backscatter coefficients, only a few pixels show a degradation of the aerosol product if the lidar signal is accumulated on a finer scale. These pixels probably correspond to a strong attenuation of the lidar signal, but this needs to be confirmed with a thorough evaluation. This first result demonstrates the robustness of the algorithm used to retrieve aerosol properties. For very long groups, the performance is equivalent to the 90 km product, as expected, but a significant improvement is shown for the backscatter coefficient on small features below targets that do not attenuate the signal too much.

Figure captions:
Figure 2: Probability of detection, with the distribution for clear sky as a red line and the distribution for the particle-loaded signal as the green area. Figure extracted from the CALIOP ATBD described in [4].
Figure 3: Feature finder performance: good detection (yellow pixels), false alarm (green pixels), no detection (light blue) for the LITE C scenario with complex heterogeneous clouds (top panel) and the LITE A scenario with homogeneous high thin clouds (bottom panel).
Figure 4: Ratio of the retrieval errors at a small horizontal scale to the retrieval errors at a 90 km accumulation length for the extinction coefficient (top panel) and the backscatter coefficient (bottom panel). A logarithmic scale is used.
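The comparison underlying Figure 4 can be emulated with a simple error metric. Since the exact definition of the ratio is not spelled out above, the sketch below assumes a log-ratio of absolute retrieval errors (90 km error over feature-based error); this is only one plausible choice, picked because it reproduces the sign convention described in the text, with positive values meaning that the feature-based retrieval is closer to the E2S truth.

import numpy as np

def log_error_ratio(aer_ff, aer_90km, aer_e2s_ff):
    """Assumed metric: log10 of the 90-km retrieval error over the feature-based error.
    Positive values mean the feature-based retrieval is closer to the E2S truth."""
    err_ff = np.abs(aer_ff - aer_e2s_ff)
    err_90 = np.abs(aer_90km - aer_e2s_ff)
    return np.log10(err_90 / err_ff)

# invented backscatter values (m^-1 sr^-1) for three detected features
truth = np.array([2.0e-6, 5.0e-6, 1.0e-5])
ff    = np.array([2.2e-6, 4.6e-6, 1.1e-5])
km90  = np.array([3.0e-6, 3.5e-6, 0.7e-5])
print(log_error_ratio(ff, km90, truth))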
Management Information Systems and Data Science in the Smart Grid – Inner Class Area Capacity Distribution of the iSHM Class Maps of Overhead Low-Voltage Broadband over Power Lines Topologies On the basis of the initial Statistical Hybrid Model (iSHM), the iSHM class maps, which are 2D contour plots and may graphically classify the real and virtual OV LV BPL topologies into five class areas, are upgraded in this paper by exploiting the third dimension of the capacity so that the upgraded class maps can provide additional information concerning the inner class area capacity distribution. The comprehension of the behavior of the inner class area capacity distribution is critical in order to deeper understand the extent and the position of iSHM class map footprints when various operation conditions of OV LV BPL topologies occur. Two inner class area capacity distribution rule of thumbs that deal with the OV LV BPL topology classification and capacity estimation are proposed thus supporting the management information system of OV LV BPL networks. Introduction Broadband over Power Lines (BPL) technology can be considered to be one among the communications solutions, such as Radio Frequency (RF) mesh, modified Long Term Evolution (LTE), Code Division Multiple Access (CDMA) at sub GHz bands, dedicated fiber along high voltage lines and 5G communications, that is going to help towards the transformation of the existing power grid into an advanced IP-based communications network enhanced with a plethora of broadband smart grid applications [1], [2]. Since BPL technology exploits the available power grid infrastructure (i.e., Multiconductor Transmission Line (MTL) configurations and related power devices), which is anyway designed to deliver power rather to support communications, high and frequency-selective channel attenuation remains as one of the critical deficiencies of the BPL signal propagation and transmission. Among the different categories of BPL channel models, a great number of statistical BPL channel models, has been proposed in a variety of BPL technology application fields [3]- [14]. SHM that has been recently proposed in [7]- [9] is based on the Deterministic Hybrid Model (DHM), which has been exhaustively tested in transmission and distribution BPL networks [15]- [17], and is hereafter applied as the required SHM system input procedure. Actually, initial Statistical Hybrid Model (iSHM), which is one of the two supported versions of SHM and is employed in this paper, offers simulation results that are treated as the statistically processed DHM numerical results through a set of appropriate Channel Attenuation Statistical Distributions (CASDs); say, Gaussian, Lognormal, Wald, Weibull and Gumbel CASDs. Until now, the impact of a variety of parameters on the iSHM simulation results has been investigated so far such as the topology length, the interconnections between branches / main lines, branch lengths, distances between branches, branch terminations and channel attenuation measurement differences between the theoretical and practical results due to the real operation conditions [8], [18], [19]. 
Apart from the impact of the aforementioned intrinsic parameters, critical events during the operation of power grids, such as branch line faults and hook-style energy thefts, can be detected even if real operation conditions occur by exploiting the class maps footprints of iSHM [20], [21]; here, it should be reminded that a class map is a 2D contour plot that: (i) graphically classifies real and virtual BPL topologies in terms of their CASD Maximum Likelihood Estimator (MLE) parameter pairs and capacity; (ii) illustrates the borders between the BPL topology classes; and (iii) corresponds each CASD MLE parameter pair to its BPL topology subclass average capacity for given power grid type, CASD, coupling scheme, Injected Power Spectral Density Limits (IPSD) limits and noise Power Spectral Density (PSD) levels; while class map footprints are the graphical correspondence of CASD MLE parameter pair with the capacity that are represented on the class maps and may assess the impact of the intrinsic parameter change or the existence of critical events during the power grid operations. As the OV LV BPL topologies are examined in this paper, when changes of intrinsic parameters or the aforementioned critical events occur the respective CASD MLE parameters of the modified OV LV BPL topologies tend to change their iSHM footprint locations on the class maps following patterns of the same capacity behavior as presented in [20], [22]. In this paper, the inner class area capacity distribution of the iSHM class maps of OV LV BPL topologies is first investigated while the differences between the capacity of the aforementioned modified OV LV BPL topologies and the respective BPL topology subclass average capacities, which are used in class maps, for given CASD MLE parameter pairs is computed. On the basis of the findings of the inner class area capacity distribution of the iSHM class maps of OV LV BPL topologies and by exploiting the data science, the management information system of the BPL networks is further enhanced with two newly presented rules of thumb that allow: (i) the capacity assessment and the classification of OV LV BPL topologies by studying their CASD MLEs parameter pairs; (ii) the stability of the power grid; and (iii) the surveillance and monitoring of the power grid in order to identify possible critical events. The rest of this short paper, which may act as a companion paper of [18], [20], [22], [23], is organized as follows: Section 2 briefly presents the theory concerning the iSHM class maps and iSHM class map footprints of OV LV BPL topologies. In Section 3, the simulation results regarding the inner class area capacity distribution are demonstrated as well as and the capacity differences between the modified OV LV BPL topologies and the respective BPL topology subclass average capacities of iSHM class maps. The two rules of thumb concerning the monitoring and surveillance of the OV LV BPL networks are here presented. iSHM Class Maps and iSHM Class Map Footprints Prior to numerically investigate the inner class area capacity distribution in iSHM class maps, brief details concerning the definition of iSHM class maps, the iSHM class map footprints of OV LV BPL topologies and the need for this capacity distribution study of class maps are here given. Note that the values of the required operation parameters for the interconnection and the fine application of DHM, iSHM, the iSHM class maps and the iSHM class map footprints of OV LV BPL topologies are reported in [22], [23]. 
iSHM Class Maps Already been mentioned, SHM consists of two versions; say, iSHM and mSHM. With reference to the BPMN diagrams of iSHM [7], [23], iSHM that is the SHM version of interest in this paper consists of six Phases. The input parameters of iSHM, which coincide with the ones of DHM, are the topological characteristics of the examined real indicative OV LV BPL topologies, the applied coupling schemes, IPSD limits and noise PSD levels while the output of iSHM is the capacity range of each OV LV BPL topology class for given CASD. Also, iSHM supports five CASDs with their corresponding MLEs (i.e., Gaussian, Lognormal, Wald, Weibull and Gumbel CASDs). According to [20], each iSHM CASD exhibits different performance depending on the input parameters but Weibull CASD performs the best performance among the available ones in terms of the performance capacity metrics of the absolute threshold of percentage change and the average absolute percentage change when OV LV BPL topologies are examined. Hence, the CASD approximation accuracy to the real capacity results of [20] mandates the use of Weibull CASD for the further analysis of OV LV BPL topologies in this paper, which is anyway reevaluated for its accuracy during the critical events assumed in this paper. To enrich the five OV LV BPL topology classes, which are straightforward defined after the initial selection of the five respective real indicative OV LV BPL topologies of Table 1 of [23], with other OV LV BPL topologies, the iSHM definition procedure, which has been proposed in [24], statistically defines virtual indicative OV LV BPL topologies by using iSHM CASD MLE parameter pairs and inserts the virtual topologies to the existing five OV LV BPL topology classes. The flowchart of the iSHM definition procedure for OV LV BPL topologies is given in Fig. 3(a) of [23]. Class maps are the output of the iSHM definition procedure where: (i) each CASD MLE parameter pair is corresponded to its OV LV BPL topology subclass average capacity; (ii) CASD MLE parameter pairs can describe real and virtual OV LV BPL topology subclasses; (iii) OV LV BPL topology class areas may be illustrated with respect to the computed capacity borders between the OV LV BPL topology classes; and (iv) with respect to the capacity borders of OV LV BPL topology classes and the Weibull CASD capacity performance, OV LV BPL topology subclasses can be arranged on the class maps by exploiting their CASD MLE parameter pairs. iSHM Footprints on Class Maps iSHM footprints that are added on the class maps as groups of white spots can graphically assess the impact of the intrinsic parameter changes or the existence of critical events during the power grid operation and can help towards the quick identification of the critical events for future actions by the management information system of the smart grid. The theoretical definition of iSHM footprints has been presented in [23] while a variety of applications of iSHM footprints has been presented so far, such as the iSHM footprints of the real OV LV BPL topologies, of the OV LV BPL topologies with a sole branch line fault and of the OV LV BPL topologies with a single hook for energy theft [20]. By studying iSHM footprints of OV LV BPL topologies, it is clear that the iSHM footprint extent, size, white spot positions and white spot group direction with respect to the real indicative OV LV BPL topologies may imply the intrinsic parameter change or the nature of the occurred critical events during the power grid operation. 
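The correspondence between a CASD MLE parameter pair and a subclass average capacity can be pictured as a nearest-grid-point lookup on a precomputed class map. The following sketch is schematic: the grid spacing, the synthetic capacity surface and the way the Weibull MLE pair is fitted from channel attenuation samples are placeholders, not the actual iSHM implementation.

import numpy as np
from scipy.stats import weibull_min

def weibull_mle_pair(attenuation_samples_db):
    """Fit a Weibull CASD to channel attenuation samples; return (shape, scale) MLEs."""
    shape, _, scale = weibull_min.fit(attenuation_samples_db, floc=0.0)
    return shape, scale

def lookup_capacity(mle_pair, grid_shape, grid_scale, capacity_map):
    """Return the subclass average capacity of the nearest class-map grid point."""
    i = np.abs(grid_shape - mle_pair[0]).argmin()
    j = np.abs(grid_scale - mle_pair[1]).argmin()
    return capacity_map[i, j]

# placeholder class map: 50x50 grid of MLE pairs with an invented capacity surface (Mbps)
grid_shape = np.linspace(0.5, 3.0, 50)
grid_scale = np.linspace(1.0, 20.0, 50)
capacity_map = 400.0 - 6.0 * grid_scale[None, :] - 15.0 * grid_shape[:, None]

samples = weibull_min.rvs(1.2, scale=12.0, size=500, random_state=1)
print(lookup_capacity(weibull_mle_pair(samples), grid_shape, grid_scale, capacity_map))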
Until now, Weibull CASD is applied during the preparation of iSHM footprints of OV LV BPL topologies due to the best performance among the available supported iSHM CASDs in terms of the performance capacity metrics of the absolute threshold of percentage change and average absolute percentage change when the indicative real OV LV BPL topologies have been assumed [20]. The goal of this paper and of the following Section is to assess the capacity estimation accuracy of iSHM footprints for given Weibull CASD MLEs that come from the OV LV BPL topologies that suffer from intrinsic parameter change or critical events, briefly denoted as modified OV LV BPL topologies, and, hence, justifies the white spot positions and directions of the respective iSHM footprints on the iSHM class maps. The aforementioned assessment can be accomplished either qualitatively by further investigating the inner class area capacity distribution of the iSHM class maps or quantitatively by comparing the capacity of the modified OV LV BPL topologies with the average capacity of the OV LV BPL topology subclass of iSHM class maps whose CASD MLE parameter pair is the same with the one of the examined modified OV LV BPL topology. Numerical Results and Discussion In this Section, numerical results that investigate the inner class area capacity distribution of the iSHM class maps of OV LV BPL topologies are first demonstrated so that a clearer image of the relation between Weibull CASD MLE parameter pair and the OV LV BPL topology subclass average capacity can be shown. This study allows the easier matching between the Weibull CASD MLEs of the examined modified OV LV BPL topologies and the capacity as defined by the iSHM class maps. Then, for the cases of: (i) arbitrary real OV LV BPL topologies; (ii) real indicative OV LV BPL topologies with arbitrary branch line faults; and (iii) real indicative OV LV BPL topologies with arbitrary hook-style energy thefts, their capacities are reported and are compared against the average capacity of the OV LV BPL topology subclass of iSHM class maps whose CASD MLE parameter pair is the same with the one of the examined modified OV LV BPL topology. iSHM Class Maps of OV LV BPL Topologies Already been mentioned, iSHM class maps of Weibull CASD are the basis where iSHM footprints of Weibull CASD are applied onto when the modified OV LV BPL topologies are going to be examined by the management information system of the smart grid. In accordance with [20], the iSHM class map of OV LV BPL topologies is plotted in Fig. 1(a) with respect to ̂M LE Weibull and ̂M LE Weibull for the default operation settings of [23] when the average capacity of each OV LV BPL topology subclass is considered. Note that as the spacings of the horizontal and vertical axes are concerned, they are assumed to be equal to 50 instead of 10 of [23] so that a clearer image of the iSHM class maps and the following iSHM footprints can be examined. The respective isodynamic capacity chart of Fig. 1(a) is plotted in Fig. 1(b) without considering the borderlines between OV LV BPL topology classes. With reference to Figs. 1(a) and 1(b), several interesting remarks can be pointed out concerning the location of the modified OV LV BPL topologies and the inner class area capacity distribution of the iSHM class maps, namely: • By comparing Fig. 1 Fig. 
3(a) of [20], it is evident that the iSHM footprint of OV LV BPL topologies with one branch is clearly constrained in the rural class area while the majority of the cases examined are approximately located at the same isodynamic capacity curve with small right or left deviations. Here it should be reminded that the OV LV BPL topologies with longer branches tend to present increased values of ̂M LE Weibull and ̂M LE Weibull and thus are located to the upper right part of the isodynamic capacity curve while a small right shift from the isodynamic capacity curve is expected since the OV LV BPL topologies with longer branches are characterized by relatively lower capacities in comparison with the real indicative OV LV BPL rural case. The opposite situation holds for the OV LV BPL topologies with one short branch. In the case of OV LV BPL topologies with two branches, similar observations can be made but for the suburban class area and for its isodynamic capacity curves. • As the effect of branch line faults is examined and the potential of the branch line fault detection is discussed, the iSHM footprint of OV LV BPL topologies with one branch line fault is not strictly located at adjacent isodynamic capacity curves as shown by comparing Fig. 1(b) with Fig. 5 of [20]. In fact, the identification of the OV LV BPL topologies with branch line faults of short length becomes easier since their corresponding iSHM footprint violates the class area borders in accordance with [20]. The violation of the class area borders is anyway an easy task for the management information system of the smart grid to detect. The same result is reached by comparing the location of white spots of the iSHM footprint of OV LV BPL topologies with one branch line fault with the isodynamic capacity curves. • As the effect of single hooks for energy theft is examined and their detection potential is studied, the iSHM footprint of OV LV BPL topologies with one hook for energy theft is significantly right shifted from the real indicative OV LV BPL topology of reference. Since the vast majority of the white spots of the iSHM footprint of OV LV BPL topologies with one hook for energy theft is dislocated and approximately located on an isodynamic capacity curve far right, this implies that the detection of energy thefts is far easier by the management information system than the detection of branch line faults. With reference to Fig. 1(b), it is clearly shown that the inner class area capacity distribution of the iSHM class maps remains zonal and solid without islands of capacity changes. The violation of isodynamic capacity curves helps towards the identification of a critical event during the operation of the power grid while the Weibull CASD MLE parameter pair can help towards the exact location of the critical event when the nature of the critical situation is identified by the management information system given the low intensity of the measurement differences. But the aforementioned identification and classification of the critical situation presumes that the capacity of the modified OV LV BPL topology that suffers from the critical event is equal to the capacity of the applied isodynamic capacity curve which is plotted on the basis of the average capacities of the OV LV BPL topology subclass of Weibull CASD iSHM class map. 
For that reason, the capacity of the modified OV LV BPL topologies is compared with the average capacity of the OV LV BPL topology subclass of Weibull CASD iSHM class maps whose CASD MLE parameter pair is the same with the one of the examined modified OV LV BPL topology. Note that the presentation of the topological characteristics of OV LV BPL topologies of the following Sections in this paper is made on the basis of the scheme of the typical OV LV BPL topology that is illustrated in Fig. 1(b) of [23]. Also, the topological characteristics of the five indicative OV LV BPL topologies, which act as the representative topologies of the respective classes and their main subclasses, are reported in Table 1 of [23] and on that basis the presentation of the topological characteristics of the arbitrary OV LV BPL topologies of the following Sections is given in the following Tables. iSHM Footprints and Capacity Differences of the Real OV LV BPL Topologies for the Default Operation Settings In Sec.2.4 of [20], the footprint of the real OV LV BPL topologies has been illustrated on the iSHM class maps where real OV LV BPL topologies have been retrieved by the Topology Identification Methodology (TIM) database of [25]. As the applied TIM BPL topology database specifications are concerned, they have been reported in [25] for the database preparation. For the further analysis of this Section, 12 arbitrary real OV LV BPL topologies of one branch and 16 arbitrary real OV LV BPL topologies of two branches, which are reported in Table 1 and 2, respectively, are assumed. As the iSHM footprint parameters of the arbitrary real OV LV BPL topologies with one branch are concerned, Weibull CASD MLE parameter pairs (say, ̂M LE Weibull and ̂M LE Weibull ) of each of the 12 arbitrary real OV LV BPL topologies of one branch are reported in Table 1 as well as the capacity of the examined OV LV BPL topology. Also, the average capacity of the OV LV BPL topology subclass of Weibull CASD iSHM class map for given arbitrary real OV LV BPL topology of one branch is Table 2 where the arbitrary OV LV BPL topologies of two branches are grouped to three isodynamic capacity curves. • In accordance with Table 2 of [20], Weibull CASD succeeds in satisfying the strict capacity performance criteria of absolute threshold of percentage change and average absolute percentage change for the five indicative OV LV BPL topologies. In Table 1, it is shown that the capacity of the examined arbitrary OV LV BPL topology with one branch is approximately the same with the average capacity of the respective OV LV BPL topology subclass of the Weibull iSHM class map of Fig. 1(a) (say, capacity differences that range from -1.308Mbps to 0.002Mbps) that entails that Weibull CASD can accurately approximate the capacity of each OV LV BPL topology of one branch given the Weibull CASD MLEs. Same results can be observed in the case of OV LV BPL topologies with two branches where the capacity differences range from -1.098Mbps to 1.798Mbps. Therefore, Weibull CASD can produce successful capacity estimations for any OV LV BPL topology that lies at the rural and suburban class area given the Weibull CASD MLEs. It is evident from the analysis of this Section that a direct correspondence can be assumed between Weibull CASD MLEs of an examined OV LV BPL topology, its position on the class map, its position on the isodynamic capacity curves and its capacity. 
Combined with the zonal capacity distribution behavior of class maps, it is shown that OV LV BPL topologies with one, two and three and above branches are expected to be located at rural, suburban and urban class areas, respectively. iSHM Footprints and Capacity Differences of the Real OV LV BPL Topologies with One Branch Line Fault for the Default Operation Settings In Sec.2.5 of [20], the footprint of the real OV LV BPL topologies with one branch line fault has been illustrated on the iSHM class maps where the real indicative OV LV BPL urban case A has acted as the reference topology. Actually, Fault and Instability Identification Methodology (FIIM) database of [25] has been exploited so that all the possible real OV LV BPL topologies with one branch line fault can be retrieved by the real indicative OV LV BPL urban case A. For the further analysis of this Section, 6 arbitrary real OV LV BPL topologies of one branch line fault, which are based on the topological characteristics of the real indicative OV LV BPL urban case A, are reported in Table 3. Note that two different branch line fault lengths are assumed per each of the three branch lines of the real indicative urban case A. As the iSHM footprint parameters of the arbitrary real OV LV BPL topologies with one branch line fault are concerned, Weibull CASD MLE parameter pair of each of the 6 arbitrary real OV LV BPL topologies of one branch line fault is reported in Table 3 as well as the capacity of the examined OV LV BPL topology. Similarly to Tables 1 and 2, the average capacity of the OV LV BPL topology subclass of Weibull CASD iSHM class map for given arbitrary real OV LV BPL topology of one branch line fault and the capacity difference between the examined OV LV BPL topology and the average capacity of the OV LV BPL topology subclass of Weibull CASD iSHM class map of the same Weibull CASD MLE parameter pair with the one of the examined OV LV BPL topology are also given. With respect to Figs. 1(a), 1(b) and Fig. 5 of [20], Table 3 may offer valuable information towards the detection of the critical event of branch line faults during the operation of the power grid by exploiting the iSHM footprints of the real OV LV BPL topologies and the inner class area capacity distribution of the iSHM class maps. More specifically: • Similarly to Tables 1 and 2, the capacities of the examined arbitrary OV LV BPL topologies with one branch line fault remain approximately the same with the average capacities of the respective OV LV BPL topology subclasses of the Weibull iSHM class map of Fig. 1(a) (say, capacity differences that range from -3.604Mbps to 1.959Mbps). The capacity differences are small enough in comparison with the achieved capacities of the examined OV LV BPL topologies that again entails that Weibull CASD can accurately approximate the capacity of each OV LV BPL topology of one branch line fault given its Weibull CASD MLEs. • Since a direct correspondence between the Weibull CASD MLE parameter pair and the capacity of an examined OV LV BPL topology with one branch line fault is verified, there is no need for depicting the Weibull CASD MLE parameter pair of the examined OV LV BPL topology on the class map since a simple check of the achieved capacity of the examined OV LV BPL topology by the management information system with the capacity borderlines of a class area can assure if its location lies inside the class area or not. 
• Since the real indicative OV LV BPL urban case A acts as the basis for the study of single branch line faults of this Section, the capacities of the class area capacity borderlines are equal to 254Mbps and 298Mbps. By examining the Table 3, it is evident that Topology 3.5 presents capacity that is equal to 299.801Mbps and exceeds the upper capacity borderline of the urban case A class area. If a branch line fault is suspected during the operation of the power grid, this exceedance may be an alert for the management information system. Indeed, the capacity borderline violation remains the safest way to detect a potential branch line faults as also explained and depicted in Fig. 5 of [20]. • In accordance with Table 1 of [20] and Fig. 1(a), the real indicative OV LV BPL urban case A is characterized by ̂M LE Weibull , ̂M LE Weibull and capacity that are equal to 13.29, 1.11 and 275Mbps, respectively. As the OV LV BPL topologies of Table 3 are regarded, their ̂M LE Weibull and ̂M LE Weibull range from 9.993 to 14.737 and from 0.796 to 1.272, respectively, while their capacities range from 266.367Mbps to 298.681Mbps. Since branch line fault reforms the existing affected branch line of the OV LV BPL topology to a shorter one, capacity fluctuations can be observed while a movement across the isodynamic capacity curve is expected due to the fluctuations of ̂M LE Weibull and ̂M LE Weibull . As the violation of class area capacity borderlines and of the isodynamic capacity curves is an evidence for the existence of a branch line fault when no measurement differences are assumed, FIIM, which is analyzed in [25], can be then activated so that the branch line fault can securely identified through its supported repertory of faults and instabilities. Anyway, FIIM is a part of the management information system of the smart grid and here its performance is enhanced. As already been identified, the detection of branch line fault of short length becomes an easier task since violation of the capacity borderlines of the corresponding class area is more expected. iSHM Footprints and Capacity Differences of the Real OV LV BPL Topologies with One Hook for Energy Theft for the Default Operation Settings In Sec. 2.6 of [20], the footprint of the real OV LV BPL topologies with a single hook for energy theft has been shown on the iSHM class maps where the real indicative OV LV BPL suburban case has acted as the basis topology. In accordance to [26], Hook Style energy theft DETection (HS-DET) method may generate all the real OV LV BPL topologies with a single hook for energy theft that are based on the OV LV BPL suburban case but for the sake of the analysis of this Section, 9 arbitrary real OV LV BPL topologies of one hook for energy theft, which are based on the topological characteristics of the real indicative OV LV BPL suburban case, are reported in Table 4. As the iSHM footprint parameters of the arbitrary real OV LV BPL topologies with one hook for energy theft are concerned, Weibull CASD MLE parameter pair of each of the 9 arbitrary real OV LV BPL topologies of one hook for energy theft is reported in Table 4 as well as the capacity of the examined OV LV BPL topology. 
Similarly to Tables 1-3, the average capacity of the OV LV BPL topology subclass of the Weibull CASD iSHM class map for a given arbitrary real OV LV BPL topology with one hook for energy theft, and the capacity difference between the examined OV LV BPL topology and the average capacity of the OV LV BPL topology subclass of the Weibull CASD iSHM class map that shares the same Weibull CASD MLE parameter pair with the examined OV LV BPL topology, are also given. With respect to Figs. 1(a), 1(b) and Fig. 7 of [20], Table 4 confirms how conveniently energy theft can be detected by the management information system of the smart grid. Since the hook for energy theft may be treated as an additional branch line to the existing ones of the examined OV LV BPL topology, its insertion implies the transition of the examined OV LV BPL topology from the existing topology class to the more aggravated one; in the case of this Section, the original OV LV BPL topology is located in the suburban class area, while the modified topologies after the insertion of the hook are located in the class area of the urban case A. By taking into account the capacity of the modified OV LV BPL topologies, the low capacity differences and the capacity borderlines of the urban case A class area, it is evident that a steep decrease of the capacity that occurs due to the insertion of the hook indicates a possible energy theft, provided that all other critical events have been ruled out. Another iSHM footprint characteristic that is highlighted in Table 4 is the steep and steady increase of the first Weibull CASD MLE parameter with respect to the corresponding value of the OV LV BPL suburban case, which is equal to 6.62 as reported in Table 1 of [20]. An increase of the second Weibull CASD MLE parameter can also be observed in the modified OV LV BPL topologies, although this occurs only in the majority of the cases, as presented in Fig. 7 of [20]. Due to the generalized differentiation of the two Weibull CASD MLE parameters and of the capacity, the detection of hook style energy theft remains easier than that of a branch line fault, while the application of the HS-DET method, as analyzed in [26], may allow the location determination of any hook style energy theft in OV LV BPL networks.

Weibull iSHM Class Maps and Inner Class Area Capacity Distribution Rules of Thumb

As already mentioned for Figs. 1(a) and 1(b), first, either the Weibull iSHM topology class borderlines or the isodynamic capacity curves create solid capacity zones without capacity islands on them. Second, it has been demonstrated in Tables 1-4 that the capacity of the examined OV LV BPL topologies remains almost equal to the average capacity of the respective OV LV BPL topology subclasses of the Weibull CASD iSHM class map for a given Weibull CASD MLE parameter pair. Third, either the Weibull iSHM topology class borderlines or the isodynamic capacity curves can be satisfactorily approximated by equations that involve the two Weibull CASD MLE parameters, thus clearly defining the limits so that (i) the examined OV LV BPL topology can be grouped among the available topology classes through its Weibull CASD MLE parameter pair; and (ii) the capacity of the examined OV LV BPL topology can be estimated through its Weibull CASD MLE parameter pair. For clarity reasons, the following approximation analysis is adopted: the Weibull iSHM topology class borderlines of Fig. 1(a) and the Weibull iSHM isodynamic capacity curves of Fig. 1(b) may be approximated through quadratic regression equations that are detailed in Tables 5 and 6, respectively.
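To make the quadratic regression approximation just mentioned concrete, the following is a minimal Python sketch of how one such class borderline (or isodynamic capacity curve) could be fitted in the class map plane. The sample points are invented for illustration and are not the entries of Tables 5 and 6, and the choice of which Weibull CASD MLE parameter is treated as the independent variable is an assumption.

```python
import numpy as np

# Hypothetical points sampled along one class borderline (or isodynamic capacity
# curve) of the Weibull iSHM class map: first MLE parameter (x) vs second (y).
x = np.array([6.0, 8.0, 10.0, 12.0, 14.0])
y = np.array([0.82, 0.95, 1.04, 1.12, 1.18])

# Second-order (quadratic) regression, as assumed for Tables 5 and 6;
# np.polyfit returns the coefficients from the highest power downwards.
a2, a1, a0 = np.polyfit(x, y, deg=2)
print(f"y ≈ {a2:.4f}·x² + {a1:.4f}·x + {a0:.4f}")

def borderline(x_val):
    """Evaluate the fitted quadratic regression equation."""
    return a2 * x_val**2 + a1 * x_val + a0
```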
Now, let us assume arbitrary OV LV BPL topologies, say, Topologies 1.1, 2.1, 3.1 and 4.1, whose Weibull CASD MLE parameter pairs are already reported in Tables 1-4, respectively.

[Table 6: Quadratic regression equations of the iSHM isodynamic capacity curves of Fig. 1(b)]

The results of the class borderline quadratic regression equations of Table 5, evaluated at the Weibull CASD MLE parameter pair of each of these topologies, are reported in Table 7, together with the capacity of each OV LV BPL topology. Similar results to those of Table 7 are given in Table 8, but for the case of the iSHM isodynamic capacity curve quadratic regression equations R6.1-R6.6. By observing Tables 7 and 8, two rules of thumb that are based on the inner class area capacity distribution are first presented in the following analysis. In order to define these rules of thumb, a piece of evidence that is very useful during the following analysis is the last positive value of the quadratic regression equation results per row (i.e., per examined OV LV BPL topology), scanned from right to left, which is filled with green color in Tables 7 and 8. As for the two rules of thumb that are proposed, the first one has to do with the classification of OV LV BPL topologies, while the second one has to do with the approximate computation of the OV LV BPL topology capacity, namely:
• First rule of thumb, concerning the OV LV BPL topology classification: As the inner class area capacity distribution suggests, the class areas are solid and are bounded by borderlines approximated by the equations of Table 5. In accordance with Table 7, positive values for a given OV LV BPL topology imply that the respective quadratic regression equations lie below the Weibull CASD MLEs of the examined OV LV BPL topology. The opposite holds when negative values occur. Since the quadratic regression equations of Table 7 describe the right borderlines of the class areas and, hence, the respective lower capacity bounds of the class areas, the last positive value per row in Table 7 determines the OV LV BPL topology class of the examined topology. Therefore, Topologies 1.1, 2.1, 3.1 and 4.1 are members of the rural, suburban, urban case A and urban case A OV LV BPL topology classes, respectively. The last remark is easily verified by comparing the capacity of the examined topology with the capacities of the right borderlines of the OV LV BPL topology classes that are reported in Table 7; e.g., the capacity of Topology 1.1 is equal to 385.938 Mbps, which is greater than the lower capacity bound of the rural class, equal to 341 Mbps, and lower than the "LOS" class capacity of 387 Mbps, which is presented in Fig. 1(a) and consists only of the "LOS" case. The same observations stand for Topologies 2.1, 3.1 and 4.1, as well as for the vast majority of the OV LV BPL topologies described by Weibull CASD MLEs.
• Second rule of thumb, concerning the capacity approximation of an OV LV BPL topology: Similarly to the first rule of thumb, the isodynamic capacity curves bound compact capacity areas, while the equations of the iSHM isodynamic capacity curves are given in Table 6. In accordance with Table 8, positive values for a given OV LV BPL topology imply that the respective quadratic regression equations lie below the Weibull CASD MLEs of the examined OV LV BPL topology, while negative values imply that the respective quadratic regression equations lie above. Similarly to the first rule of thumb, the last positive value per row in Table 8 may approximate the OV LV BPL topology capacity, with the lower and upper capacity bounds given by the capacity of the isodynamic capacity curve corresponding to the last positive value and the capacity of the next isodynamic capacity curve, respectively (a minimal sketch of how both rules can be applied in practice is given right below).
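The following minimal sketch illustrates how the two rules of thumb could be applied: each quadratic regression equation is evaluated at the examined topology's Weibull CASD MLE pair, and the last positive result, in the assumed column order of Tables 7 and 8, selects the topology class (first rule) or the capacity bracket (second rule). All coefficients, class labels and the axis orientation below are placeholders rather than the actual entries of Tables 5-8.

```python
def residual(coeffs, mle_pair):
    """Positive when the quadratic regression curve lies below the topology's
    Weibull CASD MLE point (first MLE parameter on the horizontal axis,
    second MLE parameter on the vertical axis -- an assumed orientation)."""
    a2, a1, a0 = coeffs
    x, y = mle_pair
    return y - (a2 * x**2 + a1 * x + a0)

def last_positive_label(mle_pair, equations):
    """Rule of thumb: keep the label of the last equation, in the given
    column order, whose residual at the MLE pair is positive."""
    chosen = None
    for label, coeffs in equations:
        if residual(coeffs, mle_pair) > 0:
            chosen = label
    return chosen

# Placeholder borderline equations, ordered as the columns of Table 7 are assumed
# to be ordered; the coefficients are invented and are NOT the entries of Table 5.
borderline_eqs = [("urban case A", (0.001, -0.02, 0.90)),
                  ("suburban",     (0.001, -0.01, 1.20)),
                  ("rural",        (0.001,  0.00, 1.40))]

print(last_positive_label((13.29, 1.11), borderline_eqs))  # -> 'urban case A' here
```

Applying the same function to the isodynamic capacity curve equations of Table 6 implements the second rule of thumb, with the capacity of the selected curve acting as the lower bound and the capacity of the next curve as the upper bound.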
The last approximation is easily verified by comparing the capacity of the examined topology with the capacities of the isodynamic capacity curves that are reported in Table 8; e.g., the capacity of Topology 2.1 is equal to 304.988 Mbps, which is greater than the lower capacity bound defined by the isodynamic capacity curve of equation R6.4, equal to 300 Mbps, and lower than the upper capacity bound defined by the isodynamic capacity curve of equation R6.3, equal to 320 Mbps. The same observations stand for Topologies 1.1, 3.1 and 4.1, as well as for the vast majority of the OV LV BPL topologies characterized by Weibull CASD MLEs.

Conclusions

In this paper, the study of the inner class area capacity distribution of the Weibull iSHM class maps has allowed a better understanding of the relation between the Weibull CASD MLEs and the capacity of the OV LV BPL topologies. In fact, regardless of the operating conditions of the OV LV power grid (either normal or faulty operation), the Weibull CASD MLEs permit the capacity estimation of the examined OV LV BPL topologies. By exploiting the solid capacity areas of the Weibull iSHM class maps and the iSHM footprints of the OV LV BPL topologies during their faulty operation, critical events such as branch line faults and energy thefts can easily be detected through the careful consideration of the iSHM footprints of the modified OV LV BPL topologies by the management information system of the OV LV BPL networks. Going one step further, two rules of thumb have been proposed that allow the classification and the capacity approximation of the OV LV BPL topologies by exploiting the class area borderline and the isodynamic capacity curve quadratic regression equations, respectively.

CONFLICTS OF INTEREST

The author declares that there is no conflict of interest regarding the publication of this paper.
v3-fos-license
2021-09-13T01:15:37.539Z
2021-09-09T00:00:00.000
237485560
{ "extfieldsofstudy": [ "Computer Science" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://aclanthology.org/2021.findings-emnlp.341.pdf", "pdf_hash": "739a684580c93d56914361c772b600fbd697f7ae", "pdf_src": "Arxiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:101", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "739a684580c93d56914361c772b600fbd697f7ae", "year": 2021 }
pes2o/s2orc
Graph-Based Decoding for Task Oriented Semantic Parsing

(* Work done while on internship at Google.)

The dominant paradigm for semantic parsing in recent years is to formulate parsing as a sequence-to-sequence task, generating predictions with auto-regressive sequence decoders. In this work, we explore an alternative paradigm. We formulate semantic parsing as a dependency parsing task, applying graph-based decoding techniques developed for syntactic parsing. We compare various decoding techniques given the same pre-trained Transformer encoder on the TOP dataset, including settings where training data is limited or contains only partially-annotated examples. We find that our graph-based approach is competitive with sequence decoders on the standard setting, and offers significant improvements in data efficiency and settings where partially-annotated data is available.

Introduction

Semantic parsing, the task of mapping natural language queries to structured meaning representations, remains an important challenge for applications such as dialog systems. To support compositional utterances in a task oriented dialog setting, Gupta et al. (2018) introduced the Task Oriented Parse (TOP) representation and released a dataset consisting of pairs of natural language queries and associated TOP trees. As illustrated in Figure 1, TOP trees are hierarchically structured representations consisting of intents, slots, and query tokens. We propose a novel formulation of semantic parsing for TOP as a graph-based parsing task, presenting a graph-based parsing model (hereafter, GBP). Our approach is motivated by the success of such approaches in dependency parsing (McDonald et al., 2005; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017; Kulmizev et al., 2019) and AMR parsing (Zhang et al., 2019). Recently, sequence-to-sequence (seq2seq) models have become a dominant approach to semantic parsing (e.g., Dong and Lapata 2016, Jia and Liang 2016, Wang et al. 2019a), including on TOP (e.g., Rongali et al. 2020; Aghajanyan et al. 2020; Shao et al. 2020). Unlike such approaches that predict outputs auto-regressively, GBP decomposes parse tree scores over parent-child edge scores, predicting all edge scores in parallel. First, we compare GBP with seq2seq and other decoding techniques, within the context of a fixed encoder and pretraining scheme: in this case, BERT-Base (Devlin et al., 2019). This allows us to isolate the role of the decoding method. We compare these models across the standard setting, as well as additional settings where training data is limited, or when fully annotated examples are limited but partially annotated examples are available. We find that GBP outperforms other methods, especially when learning from partial supervision. Second, we compare GBP with seq2seq models that additionally leverage pretrained decoders. We find that GBP remains competitive, and continues to outperform in the partial supervision setting.

Task Formulation

We present a novel formulation of the TOP semantic parsing task as a graph-based parsing task. Our goal is to predict a TOP tree y given a natural language query x as input. The nodes in y consist of intent and slot symbols from a vocabulary of output symbols V and the tokens in x. However, y cannot be predicted directly by a conventional graph-based approach (McDonald et al., 2005) for two reasons. First, given x, we do not know the subset of intent and slot symbols that occur in y [1]. Second, intent and slot symbols can occur more than once in y [2]. To address this, let us consider a parse tree z in a space of valid trees defined as Z(x). The parse tree z can be deterministically mapped to and from y. The parse tree z consists of: (1) the tokens in x, (2) every symbol in V replicated up to a maximum number of occurrences [3] and assigned a corresponding index, and (3) a special UNUSED node in addition to the standard ROOT node. Let N(x) be this set of nodes, of which all trees in Z(x) consist. When mapping from y to z, output symbols occurring multiple times are indexed following a pre-order traversal, and any output symbol that does not occur in y is assigned to the UNUSED node in z. For example, Figure 1 illustrates an example TOP tree, y, and Figure 2 illustrates a corresponding parse tree, z.

[Figure 2: The graph-based model predicts parent assignments across a set of nodes consisting of query tokens, output symbols for intents and slots, and special UNUSED and ROOT symbols. This is the corresponding parse tree for the TOP tree shown in Figure 1. Not all output symbols are drawn; omitted symbols are attached to UNUSED. Intent and slot names are abbreviated.]

[1] Note that one could imagine instead treating slots as edge labels instead of nodes, but as the set is large (36 slots for 25 intents), little advantage would be expected.
[2] See Figure 5 in the Appendix for an example.
[3] The number of repetitions per output symbol is determined from the training data. If a symbol has a maximum of k occurrences in a TOP tree in the training data, it will have k + 2 replications. See Appendix C for more information.
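As an illustration of the mapping from y to z described above, the following is a minimal Python sketch of the node set construction and of the pre-order indexing of replicated symbols; the function names, the '@' index notation and the toy tree encoding are hypothetical and not taken from the paper.

```python
from collections import defaultdict

def build_node_set(tokens, vocab, max_occurrences):
    """Construct N(x): ROOT, UNUSED, the query tokens, and every output symbol
    replicated up to its maximum number of occurrences, each with an index."""
    nodes = ["ROOT", "UNUSED"] + list(tokens)
    for symbol in vocab:
        for idx in range(max_occurrences[symbol]):
            nodes.append(f"{symbol}@{idx}")
    return nodes

def index_symbols_preorder(top_tree, counter=None):
    """Assign replica indices to output symbols following a pre-order traversal.
    A TOP tree is represented here as a (label, children) pair; query tokens
    are plain strings."""
    if counter is None:
        counter = defaultdict(int)
    label, children = top_tree
    indexed_label = f"{label}@{counter[label]}"
    counter[label] += 1
    indexed_children = [
        child if isinstance(child, str)
        else index_symbols_preorder(child, counter)[0]
        for child in children
    ]
    return (indexed_label, indexed_children), counter

# Toy usage with symbols taken from the paper's running examples; any replicated
# symbol not used by the tree would be attached to UNUSED when z is assembled.
tree = ("IN:GET_EVENT", ["events", ("SL:DATE_TIME", ["this", "weekend"])])
indexed_tree, used_counts = index_symbols_preorder(tree)
print(indexed_tree)
```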
Scoring Model

Given that the mapping between y and z is deterministic, our goal is to model p(z | x). We follow a conventional edge-factored graph-based approach (McDonald et al., 2005), decomposing parse tree scores over directed edges between parent and child node pairs (p, c) in z:

p(z | x) ∝ exp( ∑_{(p,c) ∈ z} φ(p, c, x) ),

where edge scores, φ(p, c, x), are computed similarly to the biaffine model of Dozat and Manning (2017):

φ(p, c, x) = (e_p^x)^T U e_c^x + u^T e_p^x,

where e_p^x and e_c^x are contextualized vector representations of the nodes p and c, respectively, and U and u are a parameter matrix and vector, respectively. Node representations are computed differently for each node type in N(x). Encodings for token nodes are based on the output of a BERT (Devlin et al., 2019) encoder; replicated output symbols are embedded based on their symbol and index; ROOT and UNUSED nodes likewise have a unique embedding. All nodes are then jointly encoded with a Transformer (Vaswani et al., 2017) encoder, which produces the contextualized node representations e_p^x and e_c^x that are used in the above equations to produce the factored edge scores. The scoring model is trained using a standard maximum likelihood objective.

Chu-Liu-Edmonds Algorithm

The Chu-Liu-Edmonds (CLE) algorithm is an optimal algorithm for finding a maximum spanning arborescence over a directed graph (Chu and Liu, 1965; Edmonds, 1965). It has commonly been used for parsing dependency trees from edge-factored scoring models (e.g., McDonald et al. 2005; Dozat and Manning 2017). Note that in an arborescence (hereafter, tree), each node can have at most one 'parent', or incoming edge. Thus, the algorithm first chooses the highest scoring parent for each node as the initial best parent. It is possible that these initial best parents already form a tree; however, the procedure may instead produce a graph with cycles. In that case, CLE recursively breaks the cycles until the optimal tree is found.
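As a rough illustration of the first step just described (picking each node's highest-scoring parent and checking for cycles), here is a small, self-contained sketch. It is not a full Chu-Liu-Edmonds implementation, since a detected cycle is only reported rather than contracted and re-scored, and all names and the toy score matrix are illustrative.

```python
import numpy as np

def greedy_parents(scores):
    """scores[p, c]: edge score for parent p -> child c; node 0 plays the role of ROOT.
    Returns each non-ROOT node's highest-scoring parent (the 'initial best parent')."""
    n = scores.shape[0]
    masked = scores.copy()
    np.fill_diagonal(masked, -np.inf)     # a node cannot be its own parent
    best = masked[:, 1:].argmax(axis=0)   # best parent for children 1..n-1
    return {child: int(best[child - 1]) for child in range(1, n)}

def find_cycle(parents):
    """Return the node set of one cycle in the parent map, or None if it is a tree."""
    for start in parents:
        seen, node = set(), start
        while node in parents and node not in seen:
            seen.add(node)
            node = parents[node]
        if node in seen:                  # walked back onto a visited node: cycle
            cycle, cur = {node}, parents[node]
            while cur != node:
                cycle.add(cur)
                cur = parents[cur]
            return cycle
    return None

rng = np.random.default_rng(0)
scores = rng.normal(size=(6, 6))          # toy edge-score matrix over 6 nodes
parents = greedy_parents(scores)
print(parents, "cycle found:", find_cycle(parents))
```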
Note that CLE takes the index of the root of the tree as an input, and begins by deleting all incoming edges to enforce this constraint. Conventionally, in dependency parsing, the root of the tree is the special ROOT node. This algorithm is optimal for dependency parsing; however, our formulation differs due to additional constraints based on how TOP trees are mapped to and from dependency trees. First, by convention, the parent of the UNUSED subtree must be ROOT. Second, the UNUSED subtree must be of depth 2: it cannot have any grandchildren. Finally, as valid TOP trees have only one root, the ROOT node must have only one 'child', or outgoing edge. Unused Node Preprocessing As stated, our UNUSED subtree must only have depth 2 to follow our task formulation. Otherwise, the final tree score will be computed incorrectly when translating to a TOP tree, as the entire UNUSED subtree is effectively discarded. Thus, we first preprocess the UNUSED subtree to ensure depth 2. In practice, simply using the initial best parents will result in UNUSED subtrees with depth 3 or greater about 1% of the time. We resolve such cases by making a decision for each node a whose initial best parent is UNUSED and has children itself. One option is to delete the edge to a from UNUSED, making the next highest scoring edge the new best parent of a. The cost of this action is equal to the difference in scores between the corresponding edges. Alternatively, we can take a similar action on each child of a: delete the edge from a, making the next highest scoring edge the new best parent. The cost of this action is equal to the difference in the corresponding edges summed over every child of a. We iterate over the children of UNUSED that have children, selecting the action with the lower cost, until the constraint is met. Then, we no longer allow further modifications to the UNUSED subtree, effectively deleting it for the remaining stages of the algorithm. Note that this algorithm is not necessarily optimal: the order in which we consider the children of UNUSED can affect the final result. However, we find this approximation to work well in practice. Multiple Root Resolution Our second modification to the CLE algorithm concerns the ROOT node. Valid TOP trees are single-rooted: in our formalism, this means the ROOT node can only have a single child. To enforce this constraint, we want to choose the single child of ROOT that results in the highest scoring tree. We then provide this child's index to the CLE subroutine and delete all edges from ROOT, effectively discarding it. To find the best root, we start with the set of nodes whose initial best parent is the ROOT node. If this set is a singleton, we simply choose that node as the tree's root, providing its index to the CLE subroutine. In about 0.5% of trees, there is more than one node. In that case, we run the CLE algorithm with each node as the given root index, taking the highest-scoring tree. This is still not guaranteed to be optimal: the optimal choice of the root node could have a different initial best parent than ROOT. However, this was not observed in our experiments and trying every node drastically increases the computation. Experiments The TOP dataset consists of trees where every token in the query is attached to either an intent (prefixed with IN:) or slot label (prefixed with SL:). Intents and slot labels can also attach to each other, forming compositional interpretations. We evaluate several models on the standard setup of the TOP dataset. 
We also devise new setups comparing the abilities of several models to learn from a smaller amount of fully annotated data, both with and without additional partially annotated data. Models are compared on exact match accuracy. Following Rongali et al. (2020);Einolghozati et al. (2018), and Aghajanyan et al. (2020), we filter out queries annotated as unsupported 5 , leaving 28414 train examples and 8241 test examples. Standard Supervision We use standard supervision to refer to settings where all training examples contain a complete output tree. We also evaluate data efficiency, by comparing the performance when training data is limited to 1% or 10% of the original dataset. Partial Supervision We use partial supervision to refer to settings where we discard labels for certain nodes in the output trees of some or all training examples. Such partially annotated examples could arise in practice; for instance, when there is annotator disagree- ment on part of the output tree, or when changes to the set of possible slots or intents render parts of previously annotated trees obsolete. As semantic parsing datasets normally require expert annotators, extending fully annotated examples with additional partial annotation can be an effective strategy. For instance, Choi et al. (2015) scaled their semantic parsing model with partial ontologies, and Das and Smith (2011) used additional semi-supervised data for their frame semantic parsing model. We consider two types of partially annotated output trees described below. Terminal-only Supervision For this type of partial supervision, only the labels of each token (i.e., terminal) are preserved. See Figure 3 for an example. The label for each individual token is known, but the full set of intents and slots, and their tree structure, is unknown. This is similar to utilizing span labels that do not have full trees available. Nonterminal-only Supervision For this type of partial supervision, token (i.e., terminal) labels are discarded. This is equivalent to deleting all of the token nodes from the tree. See Figure 4 for an example. This provides the opposite type of supervision as the terminal-only supervision case. The complete set of intents and slots and their tree structure is known, but their anchoring to the query text is unknown. For instance, if a query is known to have the same parse as a fully annotated query, its grounding may still be unknown. Comparisons with Fixed Encoder We first compare GBP with other methods using the same pre-trained encoder (BERT-Base; Devlin et al. 2019). We compare with a standard sequence decoder (a pointer-generator network; Vinyals et al. 2015;See et al. 2017) implemented using a Transformer-based (Vaswani et al., 2017) decoder (PTRGEN). We report the previous results from Rongali et al. (2020) and new results from an implementation based on that of Suhr et al. (2020), which provides a slightly stronger baseline. We also compare with the factored span parsing (FSP) approach of Pasupat et al. (2019). Notably, we report new results for FSP using a BERT-Base encoder, which are significantly stronger than previously published results which used GloVe (Pennington et al., 2014) embeddings (85.1% vs.81.8%). Results can be found in Table 1. We evaluate these models across both the standard and partial supervision settings. Notably, GBP can incorporate partial supervision in a straightforward way because scores for parse trees are factored over conditionally-independent scores for each edge. 
Training proceeds as described in Section 3; however, the loss from the edges that are not given by the example is masked. Additional training details can be found in Appendix B. For PtrGen, each type of partial supervision is given a task-specific prefix; details are in Appendix A. Similar to GBP, FSP factors parse scores across local components, but also considers chains of length > 1. Therefore, terminal-only supervision uses only length 1 chains; there is no trivial way to use nonterminalonly supervision without very substantial changes. GBP is the highest-performing of the BERT-base models on the standard setup. Both GBP and FSP show better data efficiency than PtrGen. Only GBP appears to effectively benefit from partially annotated data in our experiments; the other models perform worse when incorporating this data. Comparisons with Pretrained Decoders Recently, sequence-to-sequence models with pretrained decoders, such as BART (Lewis et al., 2019) and T5 (Raffel et al., 2020), have demonstrated strong performance on a variety of tasks. Careful comparisons isolating the effects of model size and pretraining tasks are limited by the availability of pretrained checkpoints for such models. Regardless, we compare GBP (with BERT-Base) directly with such models. On the standard setting for TOP, Aghajanyan et al. (2020) report SOTA performance with BART (87.1%), outperforming GBP. We also report new results comparing GBP with T5 on both the standard supervision and partial supervision settings in Table 2. Notably, T5 is able to leverage partiallyannotated examples much more effectively than PTRGEN, which is also a Transformer-based sequence-to-sequence model but does not have a pretrained decoder. While T5 outperforms GBP on the standard setting, GBP outperforms T5 on the data efficiency and partial supervision settings. Related Work The most recent state of the art on TOP has focused on applying new methods of pretraining; (Rongali et al. 2020;Shao et al. 2020;Aghajanyan et al. 2020) all use seq2seq methods, enhanced by better pretraining from BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and BART (Lewis et al., 2020) while using similar model architectures. In this work, we instead investigate the choice of decoder. While the FSP model (Pasupat et al., 2019) similarly uses a factored approach, its approach is more specific to TOP, as its trees must be projective and anchored to the input text. In dependency parsing, the performance of graph-based and transition-based parsing is compared in both Zhang and Clark (2008) and Kulmizev et al. (2019). Graph-based parsing has also been used in AMR parsing (Zhang et al., 2019), which translates sentences into structured graph representations. Similar methods have also been used in semantic role labeling (He et al., 2018), which requires labeling arcs between text spans. This work is the first to adapt graph-based parsing to tree-like task-oriented semantic parses. Conclusions We propose a novel framing of semantic parsing for TOP as a graph-based parsing task. We find that our proposed method is a competitive alternative to the standard paradigm of seq2seq models, especially when fully annotated data is limited and/or partially-annotated data is available. Ethical Considerations We fine-tune all models using 32 Cloud TPU v3 cores 6 . Additional training details are in Appendix A and Appendix B. We reused existing pretrained checkpoints for both BERT and T5, reducing the resources needed to run experiments. 
Our evaluation focuses on the existing TOP dataset: the details of the collection can be found in Gupta et al. (2018). TOP is an English-only dataset, which limits our ability to claim that our findings generalize across languages. A deployed dialog system has additional ethical considerations related to access, given the potential of such systems to make certain computational functions faster, easier, or more hands-free.

The joint node encoder is a Transformer (Vaswani et al., 2017) with 4 attention heads, 768 dims, and a dropout rate of 0.3. We use a hidden size of 1024 for computing edge scores, similarly to Dozat and Manning (2017). Cross-entropy loss is minimized with the optimizer described in Devlin et al. (2019). For partial supervision experiments, the loss is masked for unsupervised edges. The model is trained over 20000 steps with a learning rate of 0.0001 and 2000 warmup steps. All hyperparameters are chosen by a grid search based on validation set exact match accuracy. BERT-base has approximately 110M parameters, and GBP introduces approximately 13M additional parameters, for a total of approximately 123M parameters. Note that larger versions of BERT did not lead to performance improvements in our experiments. A comparison of validation performance can be found in Table 3 (the validation set without unsupported queries has 4032 examples). All tested values for the hyperparameters can be found in Table 4. We estimate approximately 1,000 total training runs during the development cycle. After tuning hyperparameters on the full set, no re-tuning occurred: the partial supervision and data efficiency experiments used the same setup. Model training takes approximately 45 minutes.

[Figure 5: Example TOP tree with two occurrences of SL:DATE_TIME. When mapping from TOP trees to the parse trees predicted by our model, each instance of SL:DATE_TIME is assigned an index based on its pre-order position in the TOP tree.]

C Repeated Nodes

See Figure 5 for an example of a TOP tree with repeated nodes. We chose to pad occurrences based on the observation that certain nodes can occur more times than they do in the training set. About half of the nodes only ever occur once. On the validation set, 2 additional replications was the highest value before performance degraded. There are many alternatives to our handling of repeated nodes. For instance, Zhang et al. (2019) had a slightly different task, but we could have adopted their approach of generating the node set auto-regressively. Unfortunately, this would have complicated our method of partial supervision. Another method would be to use a fixed number of duplications: this worked slightly worse in practice, based on validation set performance. Alternatively, the model could have learned a regression, which has been used in non-autoregressive machine translation (e.g., Wang et al. 2019b). We leave trying such an approach to future work.

D Full Data

Results on the full dataset (including unsupported intents) can be found in
v3-fos-license
2018-04-03T00:56:30.371Z
2010-06-29T00:00:00.000
27436032
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://downloads.hindawi.com/journals/tswj/2010/987371.pdf", "pdf_hash": "c991c389fe3c2b084c7e400f0da2cd67afe7a90f", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:103", "s2fieldsofstudy": [ "Biology" ], "sha1": "41d9efad1ccee0ea2ba9cb12176197aaac29e4ee", "year": 2010 }
pes2o/s2orc
Multifunctional Proteins Bridge Mitosis with Motility and Cancer with Inflammation and Arthritis While most secreted proteins contain defined signal peptides that direct their extracellular transport through the ER-Golgi pathway, nonclassical transport of leaderless peptides/proteins was first described 20 years ago and the mechanisms responsible for unconventional export of such proteins have been thoroughly reviewed. In addition to directed nonclassical secretion, a number of leaderless secreted proteins have been classified as damage-associated molecular-pattern (DAMP) molecules, which are nuclear or cytoplasmic proteins that, under necrotic or apoptotic conditions, are released outside the cell and function as proinflammatory signals. A strong association between persistent release of DAMPs, chronic inflammation, and the hypoxic tumor microenvironment has been proposed. Thus, protein localization and function can change fundamentally from intracellular to extracellular compartments, often under conditions of inflammation, cancer, and arthritis. If we are truly to understand, model, and treat such biological states, it will be important to investigate these multifunctional proteins and their contribution to degenerative diseases. Here, we will focus our discussion on intracellular proteins, both cytoplasmic and nuclear, that play critical extracellular roles. In particular, the multifunctional nature of HMMR/RHAMM and survivin will be highlighted and compared, as these molecules are the subject of extensive biological and therapeutic investigations within hematology and oncology fields. For these and other genes/proteins, we will highlight points of structural and functional intersection during cellular division and differentiation, as well as states associated with cancer, such as tumor-initiation and epithelial-to-mesenchymal transition (EMT). Finally, we will discuss the potential targeting of these proteins for improved therapeutic outcomes within these degenerative disorders. Our goal is to highlight a number of commonalities among these multifunctional proteins for better understanding of their putative roles in tumor initiation, inflammation, arthritis, and cancer. differ both in structure and function [20]. Epithelial cells are stationary, contain cell-cell junctions, and are highly polarized, while mesenchymal cells are motile, lack cell-cell junctions, and are less polarized [20]. During differentiation and, potentially, oncogenesis, these cellular states are not fixed and a cell can show plasticity, transitioning from epithelial to mesenchymal (EMT) or mesenchymal to epithelial (MET). In humans, many of the most common tumors are carcinomas, originating in epithelial cells. As these tumors often spread to distant sites within the body, it is speculated that the tumor may hijack a differentiation program that promotes a transition from epithelial-to-mesenchymal structure, including the loss of cell-cell junctions and the acquisition of motile characteristics. Alternatively, carcinomas may arise from cancer stem cell populations. This population of cells may associate with a specific differentiation state, lacking the specialized and polarized characteristics of terminal differentiated epithelia, and aligning with EMT-like characteristics [10]. It is most likely that one hypothesis for tumorigenesis will not describe all cancers, or all carcinomas; however, it is clear that metastasis is a lethal consequence of tumor progression. 
Cellular migration is a fundamental characteristic of metastasis and an improved understanding of the genetic, biochemical, and mechanical influences (within, supporting, and surrounding the transformed cells) that determine migration, or EMT, are important goals for oncology. Importantly, most of the extracellular functions for RHAMM are demonstrated during inflammation, wound healing, and/or tumorigenesis. One important exception may be a recent report of RHAMM expression during the development of the nervous system in Xenopus embryogenesis [22]. While functional analyses were not performed, it is provocative to note that RHAMM expression correlated with the expression of CD44 and the hyaluronan synthase 1 (HAS1) in the migrating cranial neural crest [22]. Neural crest formation, along with gastrulation, is perhaps the best-studied EMT process [20]; therefore, gene expression patterns place RHAMM, CD44, and HA together during a critical EMT for normal development. Taken together, these data suggest a pivotal role for extracellular RHAMM-CD44 in determining cell fate during normal developmental processes of EMT, as well as during degenerative disease states, such as cancer, when this program may augment the survival and migration of tumor populations. RHAMM Assembles Microtubules and Regulates Mitosis and Differentiation In addition to its expression during neural crest formation, RHAMM is highly expressed during proliferative processes of embryogenesis, potentially in an HA-independent manner [22]. These findings are consistent with elevated RHAMM expression during tail regeneration in Xenopus tadpoles, a period of intense cell proliferation [23]. Moreover, RHAMM mRNA and protein expression levels are regulated by cell-cycle progression in human cell lines [24,25], while expression of RHAMM mRNA in the adult human is restricted to proliferating tissues, like testis, tonsils, and bone marrow [26]. These expression data strongly suggest that the protein's function relates to cell division and mitosis. Indeed, RHAMM directly binds microtubules through an NH2-terminal domain and interacts with the mitotic spindle [27,28]. While the basic nature of the COOH terminus mediates an ionic interaction with HA, primary structure analysis and deletion constructs identify this region as a centrosome-targeting, basic leucine zipper motif with high homology to the microtubule motor protein KLP2[28]. In both human and Xenopus, RHAMM localizes to the centrosome and mitotic spindle pole through an interaction with targeting protein for XKlp2 (TPX2), while inhibition or overexpression of RHAMM disrupts mitotic spindle assembly [24,29]. Consistently, RHAMM -/fibroblasts contain abnormal mitoses in culture [7]. Finally, recent screens [30,31] have identified key roles for RHAMM in spindle disassembly and mitotic exit. Together, these data support a critical role for RHAMM in the formation of the mitotic apparatus as a microtubule-associated protein. Many of the "hallmarks of cancer", including self-sufficiency in growth signals, insensitivity to antiproliferative signals, and limitless replicative potential, suggest that cancer is a disease of altered cell proliferation [32]. For this reason, an understanding of the mechanisms responsible for mitotic spindle assembly and exit are essential to the study and targeting of cancers. Moreover, microtubule dynamics play an integral role in cellular differentiation and migration. 
That is, the nucleation of microtubules at cell-cell junctions is essential to epithelial structure [33], while the positioning of the microtubule organizing center, the centrosome, is a determining factor in mesenchymal migration [34]. Thus, structural cues from microtubule organization likely play prominent roles in cancer progression, during both proliferation and migration. Cell division is one of the most-studied cellular processes (175,000 PubMed citations) and the protein regulators of microtubule organization during mitosis have been well described. In general, the formation of the mitotic spindle requires two microtubule-centered processes, microtubule assembly and microtubule capture. Microtubule assembly occurs at centrosomes/spindle poles and proximal to chromosomes, while microtubule capture occurs at kinetochores. Broadly speaking, the aurora kinases regulate these two processes with aurora kinase A (AURKA) promoting microtubule assembly and aurora kinase B (AURKB) promoting microtubule capture [35,36]. RHAMM and TPX2 are nonmotor spindle assembly proteins that regulate microtubule assembly and cross-linking during spindle formation (reviewed in [2,37]). TPX2 is a potent activator of AURKA [38], and RHAMM regulates TPX2 localization[24,29] and influences microtubule assembly. In turn, RHAMM abundance is determined by the activity of the heterodimeric E3 ubiquitin ligase, Breast Cancer, early onset 1 (BRCA1)-BRCA1associated RING domain 1 (BARD1) [39,40]. Thus, RHAMM abundance regulates TPX2 and is regulated by BRCA1-BARD1, as well as the anaphase-promoting complex [31], to influence microtubule nucleation through AURKA. AURKA-BRCA1-RHAMM-TPX2 interactions, which regulate microtubule organization during mitosis, may be conserved during differentiation and migration.. Given the correlation between RHAMM expression, proliferation, and EMT during Xenopus development, RHAMM may regulate microtubule organization during division and differentiation (e.g., EMT or MET). Importantly, cytoskeletal changes regulate both EMT and MET. Apico-basal polarization of epithelial cells (i.e., MET) involves a dramatic reorganization of the microtubule cytoskeleton [41], while cytoskeletal changes are crucial for these cells to leave the epithelium and begin migrating individually during EMT [20]. In particular, the dissolution of adheren junctions is an essential event to disrupt epithelial cell polarity [20]. Microtubule nucleation is perhaps an underappreciated role for adheren junctions in epithelia [33,42] and dissolution of these sites may promote centrosome nucleation of microtubules, which highlights a fundamental similarity between EMT and mitotic spindle assembly. AURKA activation is vital for the reorganization of the microtubule cytoskeleton in polarized, nonmitotic epithelia [43]. AURKA promotes interphase microtubule nucleation through the phosphorylation of BRCA1, down-regulating BRCA1-BARD1 ubiquitination of centrosome proteins, and increasing the nucleation potential of the centrosome [44,45]. In human carriers of BRCA1 mutations, luminal differentiation of mammary progenitors is altered [46], supporting the hypothesis that BRCA1 determines stem/progenitor cell fate [47]. Indeed, site-directed mutagenesis of the AURKA phosphorylation site in BRCA1 is sufficient to alter the differentiation potential of embryonic stem cells [48]; together, these data highlight a fundamental role for AURKA and BRCA1 during differentiation. 
Given that RHAMM regulates mitotic organization of microtubules through AURKA-BRCA1 [39,40], intracellular regulation of cytoskeletal elements by AURKA, BRCA1, and RHAMM may play vital roles in cellular fates, such as luminal differentiation and EMT. Consistently, disruption of AURKA [49], BRCA1 [50], or RHAMM [51] modifies neurite extension, an alternate differentiation program dependent on microtubule nucleation. Thus, RHAMM is an example of an intracellular cytoskeletal protein that regulates differentiation and division, and, upon release or secretion, alters cellular responses to extracellular cues, such as HA. In this way, RHAMM secretion may be an active process of tumor cells that promotes "tumor-initiating" properties, such as EMT and migration, and/or RHAMM release may be an indicator of degenerative disease, such as cancer, that modulates immune responses. Secreted, Cytoskeletal Proteins in Degenerative Disease RHAMM is not alone in its function as an intracellular, cytoskeletal protein that, upon release, regulates extracellular processes (Fig. 1). Glucose-regulated protein (GRP)-78 and GRP-75 are molecular chaperone proteins that interact with RHAMM at microtubules [52] and, like RHAMM, are molecular components of the Xenopus Meiotic Microtubule-Associated Interactome [53]. GRP-78, aka "immunoglobulin heavy chain-binding protein" or Bip, was identified from cells starved of glucose and is a heat-shock protein 70 family member that relocalizes to the cell surface in response to ER stress [54]. Similar to RHAMM, the roles for extracellular GRP-78 are expanding to include the regulation of cancer cell susceptibility to drugs [55], and the regulation of ERK and PI3K signaling pathways through a coreceptor [56]. Moreover, a strong association has been drawn between GRP-78 secretion, adipocyte and osteoblast differentiation [57]. Finally, the anti-inflammatory and immune-modulatory properties of extracellular GRP-75 have been reviewed [58] making this molecule, like RHAMM, an emerging therapeutic target for the treatment of arthritis [59]. Moreover, the 14-3-3 proteins are acidic, coiled-coil, scaffold proteins that interact with a variety of partners and influence signaling pathways, cell cycle checkpoints, and DNA damage responses (reviewed in [60,61]). 14-3-3 proteins localize to centrosomes and mitotic microtubules [62], and14-3-3 sigma, aka stratifin or interferon regulatory factor 6 (Irf6), plays a vital role in the prevention of mitotic catastrophe after DNA damage [63,64]. Stratifin, an antifibrogenic factor found in conditioned media from keratinocytes [64,65,66,67], is a key determinant of the proliferation-differentiation switch in keratinocytes [68]. Therefore, RHAMM, GRP-78/Bip, and stratifin represent cytoskeletal proteins with vital roles in division and differentiation; under conditions wherein the balance of differentiation and division is disrupted (i.e., cancer), these proteins are secreted or released into the extracellular compartments and regulate key aspects of cellular migration, tumor initiation, inflammation, and fibrosis. Consequently, these multifunctional proteins may be novel targets within degenerative disease (Table 1). SURVIVIN: DIVISION, DEATH, AND ARTHRITIS In addition to cytoskeletal proteins, release or secretion of nuclear proteins has also been implicated in inflammation, cancer, and arthritis [69]. 
Survivin, a member of the inhibitor of apoptosis (IAP) gene family, is a small protein with a COOH-terminal, coiled-coil structure that localizes to multiple intracellular compartments, including nuclei, mitochondria, and mitotic centrosomes and microtubules (reviewed in [70]). In addition to its cell-protective roles, survivin regulates microtubule dynamics and is an essential mitotic gene [71]. Like RHAMM[24,28,29], knockdown of survivin induces mitotic defects [72] with both molecules influencing ran-mediated spindle assembly through TPX2 [39,73]. Further to microtubule assembly, survivin plays a key role in microtubule capture at kinetochores. Survivin regulates the localization of the inner centromere protein (INCENP) [72,74,75], a key activator of AURKB, while RHAMM regulates the localization of TPX2, a key activator of AURKA. Just as RHAMM ubiquitination by BRCA1-BARD1 regulates spindle assembly [39], ubiquitination/ deubiquitination of survivin by hFAM regulates chromosome alignment and segregation [76]. Therefore, these microtubule regulators are important determinants of correct cell division. The microtubule-associated effects(s), if any, of survivin during cellular differentiation are unknown. However, a number of interesting parallels between survivin and RHAMM suggest a role for survivin in epithelial differentiation, EMT, and neurite outgrowth. In the human, survivin expression, like RHAMM, is restricted to tissues with proliferative potential, like the thymus and testis [77]. In the mouse embryo, survivin expression is nearly ubiquitous at embryonic day (E) 11.5, but restricted at E15 to -21 to the distal bronchiolar epithelium of the lung and neural crest-derived cells [78]. These expression data are suggestive of a role in proliferation, death, and EMT. Consistently, endothelial-specific loss of survivin results in embryonic lethality due to a lack of normal EMT and neural tube closure defects [79]. Thus, RHAMM and survivin function may converge during embryonic EMT. In the adult rodent, both molecules may contribute to neurite outgrowth. In PC12 cells, neurite outgrowth is suppressed by HA signaling through RHAMM [80] and by overexpression of NAIP-2, a structurally related protein to survivin in rodents [81]. It is most likely that survivin regulates cellular differentiation and death through intracellular functions; however, recent work in models of cancer and inflammation suggest additional, extracellular roles for survivin, similar to RHAMM. We speculate that an elevated mitotic rate within cancer may lead to surface display of RHAMM and survivin. If so, expression of these genes/proteins may be correlated across cancer sites. Indeed, in the Bittner multicancer dataset, including analysis of over 19,000 genes within 213 cancers (1911 samples), the expression of HMMR/RHAMM was most strongly correlated with BIRC5/survivin, along with 14 other genes, including TPX2 (Fig. 2)[26]. These results were confirmed in the Janoueix-Lerosey Brain dataset, over 19,000 genes within 64 samples (Fig. 2) [26,82]. Functional similarities between coexpressed genes revealed a strong bias towards mitotic genes/proteins with associations to microtubules and aurora kinase activity (Fig. 2). However, many coexpressed genes have also been implicated in tumor outcome and metastasis. Survivin, for example, regulates the migratory capacity of invasive breast cancer cell lines and its overexpression is sufficient to increase metastatic potential through up-regulation of fibronectin [83]. 
It is yet unclear whether extracellular survivin, like RHAMM, regulates tumor cell migration and survival or inflammation at the tumor site. Like RHAMM and stratifin, survivin has been identified in the conditioned media of tumor cell lines [84]. Extracellular survivin protected cancer cells from apoptosis and genotoxic stress, and promoted migratory behavior [84]. Indeed, survivin may promote cell signaling cascades as part of its protective and migratory effects. In leukocytes, extracellular survivin induces integrin expression through p38 MAPK and PI3K signaling [85]; these modifications are strikingly similar to the effects of extracellular RHAMM on beta-integrins in thymocytes [86] and the effect of GRP-78/Bip on Cripto signaling in cancer cells [56]. Another striking parallel between these intracellular proteins is their association with, and/or detection in, inflammatory disorders such as arthritis.

[Figure 2 caption, partially recovered: ... and Janoueix-Lerosey Brain [82] datasets. Coexpressed genes/proteins associate with microtubules and interact with aurora kinases. Additionally, expression of these genes/proteins associates with cellular migration, EMT, and adverse tumor outcomes, such as metastasis (Mets). Oncomine™ (Compendia Bioscience, Ann Arbor, MI) was used for analysis and visualization.]

Intracellular Proteins, Inflammation, and Arthritis: Autoantibodies Point the Way

Thus far, we have reviewed evidence for secretion of intracellular regulators of mitosis and differentiation in the determination of cellular migration, tumorigenesis, and inflammation. The extracellular secretion of these molecules may be restricted to degenerative states, such as cancer and arthritis, and to key developmental processes during embryogenesis. Consistently, extracellular RHAMM, GRP-78/Bip, 14-3-3, and survivin are associated with inflammatory disorders, such as arthritis. If so, the secretion of these molecules may elicit immune responses that not only demarcate degenerative disease, but also provide new therapeutic targets. The presence and functions of these multifunctional proteins have been examined both in diseased human tissues and in animal models of disease. For example, RHAMM, as detected by immunohistochemistry and quantitative immunoblot, is significantly elevated in knee synovial tissue of patients with advanced osteoarthritis (OA) compared to those without [87]. These results are consistent with animal model studies demonstrating an isoform-specific role for RHAMM in collagen-induced arthritis (CIA) [6]. Examination of the synovial fluid (SF) of patients with inflamed joints revealed the presence of stratifin, along with 14-3-3 eta, which correlated with the levels of defined biomarkers for rheumatoid arthritis (RA) [88]. Extracellular survivin has also been detected in the SF of patients with RA; survivin levels were significantly higher in RA than in OA and were higher still in patients with erosive RA compared to nonerosive RA [89]. It is postulated that extracellular survivin may be produced by rheumatoid fibroblast-like synoviocytes and induce apoptotic resistance within these cells, providing positive feedback on their proliferation [89]. Not only has extracellular survivin been detected in the SF of patients with RA, and associated with destructive disease, but antibodies to survivin relate to nonerosive RA [90]. Therefore, many multifunctional extracellular molecules impact inflammation and arthritis.
Some of these molecules, like Bip, are anti-inflammatory, while others, like survivin and RHAMM, play proinflammatory roles. The modulation of these dynamics may dramatically alter disease. Finally, the presence of these molecules can be detected indirectly by the examination of humoral responses, and may present therapeutic options in arthritis and cancer. In addition to RHAMM, autoantibodies against survivin (reviewed by [91]) and 14-3-3 theta [92] have been detected in various types of cancer. Moreover, studies have shown an association of humoral immune response with tumor progression and prognosis (reviewed by [91]). Although autoantibodies have appealing features as biomarkers (such as high specificity, simple detection techniques, and persistent presence in serum [93]), they are limited by moderate sensitivity. Only 10-20% of patients with survivin overexpression will develop survivin antibodies (reviewed by [91]). An effective way to overcome this disadvantage lies in the simultaneous assessment of a panel of autoantibodies for each type of tumor [94]. Like survivin, RHAMM has been identified by SEREX as a tumor-associated antigen in a panel of tumors [95,96]. Given the strong correlation between survivin/RHAMM in the states of inflammation, arthritis, and cancer, they could be good candidates for the panel of biomarkers. The utility of these humoral responses as markers for tissue damage and/or degenerative disease must be examined. Finally, emerging cell-based strategies to target these molecules in cancer have shown promising preclinical and clinical results. For example, peptide vaccination with a RHAMM-derived, highly immunogenic peptide, termed R3, has proven safe and effective at generating CD8+ cytotoxic cellular responses and antitumor activity in patients with acute myeloid leukemia, myelodysplastic syndrome, multiple myeloma, and, more recently, chronic lymphocytic leukemia [97,98]. An alternate strategy of vaccination with dendritic cells expressing exogenous RHAMM mRNA has proven effective against murine models of glioma [99]. Similarly, a survivin peptide-based vaccination proved safe in patients with advanced or recurrent colorectal and urothelial cancer [100,101], while survivin-expressing dendritic cells demonstrate antitumor responses in experimental models [102,103]. Provocatively, vaccination of a murine model for neuroblastoma with survivin minigene DNA demonstrated significant antitumor activity [104]. These cell-based therapies may show tumor-specific activities; with this in mind, neuroblastoma may be sensitive to RHAMM-based immunotherapies given the established roles of AURKA [105] and BARD1 [106], two components of the AURKA-BRCA1/BARD1-RHAMM-TPX2 centrosome module, within this refractory disease.
v3-fos-license
2016-05-12T22:15:10.714Z
2016-01-29T00:00:00.000
11833495
{ "extfieldsofstudy": [ "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.frontiersin.org/articles/10.3389/fncel.2016.00012/pdf", "pdf_hash": "dc8a1e77cc39c2de6e2db24f33b0e728294e25a8", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:104", "s2fieldsofstudy": [ "Biology", "Medicine" ], "sha1": "dc8a1e77cc39c2de6e2db24f33b0e728294e25a8", "year": 2016 }
pes2o/s2orc
The Relative Contribution of NMDARs to Excitatory Postsynaptic Currents is Controlled by Ca2+-Induced Inactivation NMDA receptors (NMDARs) are important mediators of excitatory synaptic transmission and plasticity. A hallmark of these channels is their high permeability to Ca2+. At the same time, they are themselves inhibited by the elevation of intracellular Ca2+ concentration. It is unclear however, whether the Ca2+ entry associated with single NMDAR mediated synaptic events is sufficient to self-inhibit their activation. Such auto-regulation would have important effects on the dynamics of synaptic excitation in several central neuronal networks. Therefore, we studied NMDAR-mediated synaptic currents in mouse hippocampal CA1 pyramidal neurons. Postsynaptic responses to subthreshold Schaffer collateral stimulation depended strongly on the absence or presence of intracellular Ca2+ buffers. Loading of pyramidal cells with exogenous Ca2+ buffers increased the amplitude and decay time of NMDAR mediated EPSCs (EPSPs) and prolonged the time window for action potential (AP) generation. Our data indicate that the Ca2+ influx mediated by unitary synaptic events is sufficient to produce detectable self-inhibition of NMDARs even at a physiological Mg2+ concentration. Therefore, the contribution of NMDARs to synaptic excitation is strongly controlled by both previous synaptic activity as well as by the Ca2+ buffer capacity of postsynaptic neurons. INTRODUCTION In the mammalian central nervous system, excitatory synaptic transmission is mediated by glutamate which co-activates postsynaptic NMDA and AMPA receptors (AMPAR and NMDAR). Fast synaptic currents are mediated by AMPAR channels, whereas NMDAR channels generate slower and longer lasting currents (Forsythe and Westbrook, 1988;Bekkers and Stevens, 1989;Stern et al., 1992;Spruston et al., 1995). NMDARs are highly permeable to Ca 2+ (Ascher and Nowak, 1988) and act as gatekeepers for the Ca 2+ influx into dendritic spines during synaptic activity (Perkel et al., 1993;Malinow et al., 1994). NMDAR function can be modulated by a large number of extracellular agents, including Mg 2+ , glycine, Zn 2+ , polyamines and protons (Collingridge and Lester, 1989;Hollmann and Heinemann, 1994), as well as by the intracellular activities of protein kinases and protein phosphatases (Kotecha and MacDonald, 2003). In addition, numerous studies have shown that an increase in intracellular calcium concentration ([Ca 2+ ] i ) causes a reversible reduction of NMDA-activated currents, irrespective of the source of calcium (Legendre et al., 1993;Vyklicky, 1993;Medina et al., 1994;Kyrozis et al., 1995;Umemiya et al., 2001). The mechanism of Ca 2+ induced inactivation of NMDARs (CIIN) involves calmodulin binding to the C-terminal of the GluN1 subunit and a subsequent reduction in the channel's open probability (Ehlers et al., 1996;Zhang et al., 1998). The functional consequences of CIIN, however, are not well understood. If calcium entry via NMDAR suffices to suppress further activation of the channels, it could mediate an important negative feedback regulation for synaptic excitation under physiological conditions (Rosenmund et al., 1995). This question has not been directly addressed in the past, because most NMDAR-mediated responses were recorded at low extracellular Mg 2+ concentration, with prolonged ( 1 ms) agonist application and without exact estimates of NMDAmediated Ca 2+ influx. 
However, even at resting membrane potentials ( −70 mV) and in the presence of physiological Mg 2+ concentrations NMDARs still act as the main synaptic source of Ca 2+ entry (Kovalchuk et al., 2000), suggesting the possibility that CIIN shapes the postsynaptic Ca 2+ dynamics and the kinetics of excitatory postsynaptic potentials (EPSPs). In order to explore the role of CIIN in synaptic transmission in neuronal networks, we addressed the following questions: (i) Is NMDAR-mediated Ca 2+ influx during unitary synaptic events sufficient to produce detectable CIIN? (ii) Does CIIN depend on membrane potential due to the voltage-dependent Mg 2+ block of NMDARs? (iii) Does CIIN affect unitary EPSP kinetics and temporal summation of postsynaptic potentials? We report that manipulating the Ca 2+ buffer capacity of hippocampal CA1 pyramidal neurons strongly affects the amplitude of single, subthreshold NMDAR-mediated EPSPs. Moreover, upon high-frequency afferent stimulation, simultaneous relief from Mg 2+ block and CIIN increased the contribution of NMDARs to postsynaptic EPSPs and significantly prolonged their decay time. Our findings suggest that Ca 2+ flux induced during unitary synaptic events is sufficient to produce detectable inhibition of NMDARs. Repetitive activation of excitatory synapses results in a significant prolongation of the integration window for synaptically evoked action potentials (APs). MATERIALS AND METHODS All experimental protocols were performed in accordance with the Kazan Federal University regulations on the use of laboratory animals (ethical approval by the Institutional Animal Care and Use Committee of Kazan State Medical University N9-2013) or by the state government of Baden-Württemberg, Germany. All efforts were made to minimize animal suffering and to reduce the number of animals used. Patch electrodes were pulled from hard borosilicate capillary glass (Sutter Instruments flaming/brown micropipette puller). Hippocampal CA1 pyramidal cells were identified visually using IR-video microscopy. Whole-cell recordings from these neurons were taken at room temperature using a HEKA EPC-7 amplifier (List Elektronik). To evoke synaptic currents, glass electrodes filled with ACSF were placed in the stratum radiatum within 50-100 µm of the body of the recorded neuron. Inhibitory synaptic transmission was blocked during recordings by the addition of 10 µM gabazine to the perfusion ACSF. The intersweep interval was 6 s. In the voltage-clamp experiments the command voltage was corrected for the liquid junction potential. AMPAR and NMDAR mediated currents were pharmacologically dissected using the AMPAR and NMDAR antagonists, CNQX (10 µM) and APV (50 µM), respectively. After recording the total current responses (containing both AMPAR and NMDAR components, 100 sweeps), AMPAR channels were blocked by bath application of CNQX (10 µM) and another 100 sweeps containing only NMDAR responses were recorded. To confirm that the remaining current after CNQX application was NMDAR-mediated; APV (50 µM) was applied at the end of every experiment. AMPA currents were obtained by subtraction of the averaged NMDA response from the averaged total response. AMPA/NMDA ratios were calculated as the peak AMPAR-mediated current amplitudes divided by the peak NMDAR-mediated current amplitudes. During recordings, membrane resistance was monitored, and data from cells in which the membrane resistance varied by >15% were discarded from the analysis. 
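The pharmacological separation described above lends itself to a simple offline analysis step. The sketch below is a minimal, hypothetical Python example (the array names, sampling rate, and synthetic sweeps are assumptions for illustration, not the original recordings or analysis code): it averages the sweep sets recorded before and after CNQX, obtains the AMPA component by subtraction, and forms the AMPA/NMDA peak ratio.

```python
import numpy as np

def ampa_nmda_ratio(total_sweeps, nmda_sweeps, baseline_samples=200):
    """Estimate the AMPA/NMDA peak-amplitude ratio from two sweep sets.

    total_sweeps, nmda_sweeps : 2D arrays (sweeps x samples) recorded before
    and after CNQX application, respectively (hypothetical data).
    """
    total = total_sweeps.mean(axis=0)           # averaged compound EPSC
    nmda = nmda_sweeps.mean(axis=0)             # averaged NMDAR-only EPSC
    # remove the pre-stimulus baseline from each average
    total = total - total[:baseline_samples].mean()
    nmda = nmda - nmda[:baseline_samples].mean()
    ampa = total - nmda                         # AMPA component by subtraction
    # inward currents are negative, so the peaks are minima
    return np.abs(ampa.min()) / np.abs(nmda.min())

# Synthetic sweeps just to demonstrate the call (100 sweeps, 10 kHz sampling assumed)
rng = np.random.default_rng(0)
t = np.arange(6000) / 10_000.0                  # 0.6 s sweep
rise = np.clip(t - 0.1, 0.0, None)              # stimulus assumed at t = 0.1 s
nmda_template = -30e-12 * (np.exp(-rise / 0.08) - np.exp(-rise / 0.01)) * (t > 0.1)
ampa_template = -80e-12 * (np.exp(-rise / 0.005) - np.exp(-rise / 0.0005)) * (t > 0.1)
total_sweeps = ampa_template + nmda_template + 5e-12 * rng.standard_normal((100, t.size))
nmda_sweeps = nmda_template + 5e-12 * rng.standard_normal((100, t.size))
print("AMPA/NMDA ratio:", ampa_nmda_ratio(total_sweeps, nmda_sweeps))
```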
Throughout the article n refers to the number of the experiments in the group. For statistical analysis, the Mann-Whitney test has been used and data are presented as mean ± SD, unless otherwise stated. Ca 2+ Entry During Unitary Subthreshold Synaptic Responses is Sufficient to Trigger CIIN The physiological significance of CIIN depends on two major questions: (i) Is the onset of CIIN fast enough to affect the amplitude of the NMDAR response during a unitary synaptic event? and (ii) Is the concentration of Ca 2+ entering the spine during subthreshold EPSPs/EPSCs sufficient to trigger CIIN? Depending on these parameters, the effect of CIIN can be either nearly instant, modulating all postsynaptic responses, or can be mostly cumulative, being pronounced during repetitive activity at high frequencies. Manipulation of [Ca 2+ ] i by changing the intracellular Ca 2+ buffer capacity provides a powerful tool to discriminate between those two scenarios. Hippocampal CA1 pyramidal neurons have naturally low endogenous buffer expression (Scheuss et al., 2006). To evaluate the kinetics of CIIN, we examined the effect of intracellular buffer loading (10 mM EGTA) into CA1 pyramidal neurons, on the amplitude of Schaffer-collateral evoked NMDAR-mediated EPSCs (nEPSCs). Experiments were carried out in Mg 2+ -free ASCF. Responses were measured either at −70 mV or at +50 mV and were compared to the amplitude of AMPAR-mediated EPSCs (aEPSCs) recorded from the same cell at −70 mV. At −70 mV, the relative amplitudes of nEPSCs recorded in the presence of EGTA were substantially higher compared to those measured with buffer-free pipette solution ( Figure 1A). Accordingly, the AMPA/NMDA ratio recorded with EGTAcontaining intracellular solution was 0.7 ± 0.26 (n = 6) and under EGTA-free conditions it was 2.44 ± 0.55 (n = 7, p = 0.001). Ratios of aEPSCs recorded at −70 mV to nEPSCs acquired at +50 mV were still slightly lower in the presence of the buffer, however the apparent difference was not significant (EGTA-free 0.94 ± 0.28, n = 7); EGTA-containing 0.81 ± 0.24 (n = 6; p = 0.45; Figure 1A). These data show that at −70 mV EGTA loaded into the cell prevents CIIN by buffering incoming Ca 2+ , resulting in increased nEPSC amplitudes compared to those in buffer-free conditions. Whereas at +50 mV, when Ca 2+ entry is negligible, the nEPSC amplitude is practically insensitive to buffer loading due to lack of CIIN. The latter also indicates that buffer loading does not trigger any long lasting voltage-independent change in synaptic NMDAR conductance. To test whether the degree of CIIN can be increased by prolonged subthreshold synaptic stimulation, we measured and compared the amplitude ratios of the second (NMDA2) and fifth (NMDA5) nEPSCs to the first nEPSC (NMDA1), using a five-pulse stimulation protocol (10 Hz) in neurons loaded with buffer-free or EGTA-containing intracellular solutions ( Figure 1B). The averaged NMDA2/NMDA1 ratio was increased slightly, but not significantly, by buffer loading (EGTA-free: 1.23 ± 0.11, n = 6; EGTA-containing: 1.36 ± 0.21, n = 5; p = 0.662). Later responses, however, were clearly enhanced by buffering Ca 2+ : the NMDA5/NMDA1 ratio was 0.9 ± 0.15 in neurons patched with EGTA-free solution (n = 6) and 1.28 ± 0.21 in EGTA-containing neurons (n = 5; p = 0.009). These data indicate that CIIN does alter the NMDAR contribution to unitary responses, with a stronger impact on NMDAR-mediated currents during prolonged repetitive activity. 
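For the group comparisons quoted above (for example, AMPA/NMDA ratios with and without EGTA loading), the Mann-Whitney test named in the text can be run with SciPy. The sketch below uses made-up placeholder samples purely to show the call pattern; it is not the authors' analysis script and the numbers are not the measured per-cell values.

```python
import numpy as np
from scipy import stats

# Hypothetical per-cell AMPA/NMDA ratios (placeholders, not the measured data)
egta_free = np.array([2.1, 2.6, 2.9, 2.2, 2.5, 2.4, 2.7])    # n = 7
egta_loaded = np.array([0.5, 0.9, 0.6, 0.8, 0.7, 0.7])        # n = 6

u_stat, p_value = stats.mannwhitneyu(egta_free, egta_loaded, alternative="two-sided")
print(f"EGTA-free:   {egta_free.mean():.2f} +/- {egta_free.std(ddof=1):.2f} (mean +/- SD)")
print(f"EGTA-loaded: {egta_loaded.mean():.2f} +/- {egta_loaded.std(ddof=1):.2f}")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```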
In the Presence of Mg 2+ CIIN Affects NMDAR-Mediated Currents in a Voltage-Independent Manner The next question was whether under physiological conditions, when NMDARs are heavily controlled by extracellular Mg 2+ , CIIN could still influence nEPSCs? The magnitude of the Mg 2+ block at resting membrane potentials is nearly maximal, resulting in a robust reduction of the NMDAR contribution to postsynaptic Ca 2+ influx. On the contrary, at less negative potentials, upon partial relief of the block, enhanced Ca 2+ influx through the channels might still have significant consequences on NMDAR function. In other words, the nonlinearity of NMDAR-mediated Ca 2+ entry due to Mg 2+ block could give rise to a voltage dependence of CIIN. To test this hypothesis, we investigated the effect of intracellular buffer loading on current voltage (IV) relationships of evoked synaptic NMDARmediated currents. Figure 2A shows averaged evoked nEPSCs measured at −70, −35, 0, 35 and 50 mV with buffer-free, EGTA-containing (10 mM) and BAPTA-containing (1 mM) intracellular solutions. All responses were normalized to the mean EPSC amplitude obtained at 50 mV, where reduced Ca 2+ entry through the channels should have a minor effect on the amplitudes of nEPSCs. Normalized nEPSCs measured at 35 mV were nearly identical, irrespective of the intracellular buffer content. However, responses recorded at −35 and −70 mV with EGTA (nEPSC −35 : −0.32 ± 0.07; nEPSC −70 : −0.11 ± 0.03; n = 6) or BAPTA (nEPSC −35 : −0.34 ± 0.09; nEPSC −70 : −0.12 ± 0.03; n = 5) in the patch pipettes were more than twofold larger than those collected with bufferfree intracellular solution (nEPSC −35 : −0.17 ± 0.04; nEPSC −70 : −0.05 ± 0.01; n = 7; Figure 2B). The enhancement of NMDAmediated EPSCs in the presence of intracellular Ca 2+ buffers was highly significant at both negative recording potentials (−35 mV, p 0.001; −70 mV, p < 0.001 one way ANOVA). However, after normalization to the values obtained at −70 mV, the normalized nEPSC −35 amplitudes in all three groups were not different (−3.71 ± 0.5, −2.72 ± 0.6 and −3.2 ± 0.9 for buffer-free, EGTA-and BAPTA-containing solutions respectively; p > 0.05 one way ANOVA; Figure 2C), indicating that the magnitude of CIIN did not depend on the strength of the Mg 2+ block. Thus, CIIN can drastically reduce nEPSC amplitudes even in the presence of Mg 2+ at physiological concentrations. The Relative Contribution of NMDAR to the Postsynaptic Responses is Strongly Controlled by CIIN To further substantiate the modulatory role of CIIN under physiological conditions and estimate its impact on the amplitude of compound EPSCs we compared AMPAR-and NMDAR-mediated responses measured at −70 and −35 mV in neurons patched with buffer-free and buffer-containing pipette solutions. At both holding potentials, the relative amplitudes of nEPSCs recorded from the cells dialyzed with buffer-free intracellular solution were significantly smaller compared to those measured with EGTA (10 mM) or BAPTA (1 mM), as indicated by much smaller AMPA/NMDA ratios (Figures 3A,B). To evaluate the effect of CIIN on the relative NMDAR contribution to the compound response, we reconstructed weighted synaptic IV-curves of aEPSCs and nEPSCs recorded with buffer-free and buffer-containing solutions. Both AMPAR- and NMDAR-mediated responses were normalized to the averaged aEPSC amplitude measured at −70 mV ( Figure 3C). As expected aEPSC amplitudes did not depend on the intracellular buffer content and the IVs of the aEPCSs were nearly linear. 
However, the weight of the NMDAR contribution to the compound EPSCs strongly depended on the presence of Ca 2+ buffers. In the cells dialyzed with buffer-free solution at −70 mV the contribution of the nEPSC was 6 ± 1% of the compound response. The impact of NMDARs increased at −35 mV (24 ± 8%) but was still significantly lower than that of AMPARs (n = 8; p < 0.001; Figure 3D). In neurons loaded with buffers, the contribution of NMDAR channels was strongly enhanced at −70 mV (10 mM EGTA: 18 ± 6%, n = 6; 1 mM BAPTA 21 ± 4%, n = 5), moreover, at −35 mV weighted nEPSC amplitudes were significantly larger than aEPSCs (59 ± 10% and 56 ± 5%, p < 0.05 for EGTA-and BAPTA-containing solutions respectively). Thus relief from CIIN gave rise to a three to fourfold enhancement in NMDAR contribution to excitatory postsynaptic responses. CIIN Moderates the EPSP Decay Time and Action Potential Firing Window The increased NMDAR contribution in CIIN-free conditions may substantially prolong EPSP duration and as a result, increase the time window for AP generation. To test these possibilities, we examined the consequences of intracellular BAPTA (1 mM) loading on the EPSP decay time constant and neuronal firing properties. Experiments were carried out in current clamp mode, and postsynaptic CA1 pyramidal cells were kept at resting membrane potential (around −65 mV). To reach different levels of postsynaptic depolarization, Schaffer collateral inputs were stimulated with 50, 20 and 10 Hz trains of three stimuli. The stimulation intensity was the minimal and sufficient intensity to trigger, with 40-60% probability, an AP in response to the 3rd stimulus in 50 Hz trains. Reduction of the stimulation frequency to 20 and 10 Hz decreased both contribution of the EPSPs temporal summation and pairedpulse facilitation to the peak depolarization reached by the 3rd EPSP. After recording 50-75 sweeps at each stimulation frequency, NMDARs were blocked by bath application of the receptor antagonist APV (50 µM) and additional 50 responses were collected at 50, 20 and 10 Hz, respectively. To estimate the contribution of NMDARs to the EPSP duration we compared the decay time constants (tau) of the averaged 3rd EPSPs before and after APV application. Sweeps with APs (occurring at 50 Hz stimulation) were excluded from EPSP decay analysis. In cells patched with buffer-free intracellular solution APV application caused a small, but non-significant acceleration of EPSP decay. The change in the EPSP time constant did not depend on stimulation frequency, the averaged tau values were (in milliseconds) 52 ± 12 vs. 38.9 ± 10 (50 Hz), 52.7 ± 17.5 vs. 37.6 ± 12.2 (20 Hz) and 51.6 ± 9.5 vs. 37.4 ± 12 (10 Hz) before and after drug application respectively (n = 5; p > 0.05; Figure 4A). However, in the BAPTA loaded neurons, activation of synaptic NMDARs had a drastic effect on EPSP decay. At 50 Hz tau values were, in control 74 ± 23, and decreased to 36 ± 14 ms in the presence of APV (n = 5; p = 0.008; Figure 4B). At 20 Hz the acceleration of EPSP decay constants by APV was still significant (58.4 ± 21 vs. 34 ± 12 ms; p = 0.029), while at 10 Hz, block of NMDARs did not cause a meaningful tau reduction (51 ± 9.2 vs. 36.1 ± 6.24 ms; p = 0.2). Note that the time constants measured in the presence of APV were very similar to those in the neurons patched with buffer-free and BAPTA-containing solutions. 
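The decay time constants (tau) compared in the current-clamp experiments below are obtained by fitting the falling phase of the averaged EPSP. A minimal curve-fitting sketch is shown here; the mono-exponential model and the synthetic trace are assumptions for illustration, not the authors' fitting routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a, tau, c):
    """Single-exponential decay used to characterise the EPSP falling phase."""
    return a * np.exp(-t / tau) + c

def fit_epsp_decay(time_ms, epsp_mv, peak_index):
    """Fit the decay after the EPSP peak and return tau in milliseconds."""
    t_fit = time_ms[peak_index:] - time_ms[peak_index]
    v_fit = epsp_mv[peak_index:]
    p0 = (v_fit[0] - v_fit[-1], 30.0, v_fit[-1])      # rough initial guess
    popt, _ = curve_fit(mono_exp, t_fit, v_fit, p0=p0, maxfev=10_000)
    return popt[1]

# Synthetic averaged EPSP (decay set near 50 ms) to demonstrate the call
t = np.arange(0, 400, 0.1)                             # ms
v = 5.0 * (1 - np.exp(-t / 3.0)) * np.exp(-t / 50.0)
tau_est = fit_epsp_decay(t, v, peak_index=int(np.argmax(v)))
print(f"estimated decay tau ~ {tau_est:.1f} ms")
```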
In line with the prolongation of EPSP decay, AP delays (latency), measured as the interval between the 3rd stimulus artifact and the peak of the AP, were significantly longer in the BAPTA loaded neurons. Figure 4C shows superimposed traces recorded from neurons recorded with buffer-free (black) and BAPTA-containing (red) pipette solutions. Cumulative probability plots (right) show the shift towards longer AP delays in the presence of BAPTA (pooled data from five cells in each group; p < 0.001, Kolmogorov-Smirnov test). Thus, ablation of CIIN and consequent enhancement of the NMDAR contribution to the EPSP duration considerably prolonged the window for AP generation. DISCUSSION It is generally accepted that the main functional role of NMDARs is related to their high permeability to Ca 2+ , which confers on NMDARs a central role in both synaptic plasticity and neuronal survival under physiological conditions and neuronal death under excitotoxic pathological conditions (Paoletti et al., 2013). Functional consequences of NMDARs modulation by various signaling molecules and biochemical cascades under physiological conditions were extensively studied over the last two decades. However the functional role of CIIN remains poorly understood. CIIN as a Mechanism of NMDARs Self-Regulation Under Physiological Conditions The phenomenon of Ca 2+ induced inhibition of NMDARs, has been well documented and explored at the level of intracellular molecular mechanisms (Legendre et al., 1993;Medina et al., 1994;Rosenmund et al., 1995;Ehlers et al., 1996;Wang and Wang, 2012;Paoletti et al., 2013;Bajaj et al., 2014;Yang et al., 2014). However, an important question remained open, namely whether the Ca 2+ entry associated with single NMDAR mediated synaptic events under physiological conditions is sufficient to self-inhibit NMDAR mediated responses. In other words, do the mechanisms governing CIIN operate on the EPSCs time scale (milliseconds)? These aspects of CIIN have not been addressed in previous studies, where CIIN was triggered either by Ca 2+ entry through voltage gated calcium channels or by the prolonged activation of NMDARs (Medina et al., 1994(Medina et al., , 1995(Medina et al., , 1996. These studies also did not strictly quantify the magnitude of NMDAR ''selfinhibition'', especially under physiological conditions. However, Ehlers et al. (1996) We have found that Ca 2+ entering trough NMDAR during a unitary synaptic event can strongly attenuate the amplitude of nEPSCs indicating that the CIIN operates on a rapid time scale of a few milliseconds. This data is in agreement with our previous findings on recombinant channels where in outside out patches, Ca 2+ influx triggered by brief (1 ms) activation of Ca 2+ permeable AMPARs was sufficient to reduce current amplitude of a co-expressed and co-activated Ca 2+ -impermeable NMDAR mutant (Rozov et al., 1997). Moreover, in physiological Mg 2+ concentrations, even around resting membrane potential, where the strength of the Mg 2+ block is nearly maximal, NMDARs can still conduct a sufficient amount of Ca 2+ , to produce a nearly fourfold reduction in the channel's function. Indeed, according to Kovalchuk et al. (2000) under these conditions, subthreshold afferent stimulation gives rise to detectable [Ca 2+ ] i in the spines of CA1 pyramidal cells which is almost exclusively mediated by NMDARs. 
Together with the fact that Ca 2+ influx through NMDARs is detectable up to at least +20 to +40 mV (Burnashev et al., 1995;Kovalchuk et al., 2000) this suggests that CIIN is operative under physiological conditions. Our data strongly suggest that this elevation in [Ca 2+ ] i is sufficient to trigger CIIN and onset of the inhibition is fast enough to shape individual postsynaptic responses. This finding is in perfect agreement with data on the magnitude of CIIN on the single channel level (Ehlers et al., 1996). Thus, we provide the first evidence that under physiological conditions synaptic NMDARs in cells with low buffer capacity are drastically selfinhibited by NMDAR-mediated Ca 2+ influx. However, in the vast majority of GABAergic interneurons the contribution of NMDARs to the Ca 2+ homeostasis is strongly moderated by an increased endogenous buffer capacity (Freund and Buzsáki, 1996). On the other hand, the presence of endogenous buffers like parvalbumin (PV), calretinin (CR) or calbindin (CB) can effectively reduce the magnitude of CIIN, increasing the contribution of NMDARs to the postsynaptic response. Indeed, synaptic expression of the GluN1 subunit in the hippocampal PV positive interneurons is several fold lower than that in CA1 pyramidal cells (Nyiri et al., 2003), nevertheless, the values of AMPA/NMDA ratios measured in these neurons are very similar (Fuchs et al., 2007). Thus, relief from CIIN by endogenous Ca 2+ buffers changes the main job of NMDARs from the synaptic Ca 2+ supplier to the active postsynaptic contributor to the amplitude and decay of EPSPs. Functionally this can minimize the role of NMDAR channels in the induction of long-term plasticity, but increase their impact on EPSP temporal summation and AP firing profile (Figure 4). It has been found recently that NMDA spikes occurred in multiple dendritic branches of layer 2/3 pyramidal neurons both spontaneously and as a result of sensory input have a major role in enhancing neuronal output in neocortex (Palmer et al., 2014). Contribution of CIIN to the local NMDA spikes, therefore, may also have influence on the number of output APs thus affecting sensory processing and network activity in the cortex. Possible Role of CIIN During Aging and in Neuropsychiatric Disorders Change of NMDARs function have also been implicated in the development of psychotic symptoms in the number of neuropsychiatric illnesses (Lakhan et al., 2013). Along with this the expression of endogenous Ca 2+ buffers changes during aging and some neurological disorders (Bu et al., 2003;Riascos et al., 2011). Thus, the selective vulnerability of the basal forebrain cholinergic neurons to degeneration in Alzheimer's disease has been attributed to the age-related loss of CB from these neurons and a consequent rise in intracellular Ca 2+ (Riascos et al., 2011). Under these conditions CIIN may play an intrinsic compensatory role counteracting intracellular Ca 2+ elevation by reducing NMDAR-mediated Ca 2+ entry. Interestingly, alteration in expression of Ca 2+ buffers often aligns with the alteration of NMDAR function. For instance, agedependent reduction of CR expression in hippocampal granular cells coincides with down regulation of GluN1 immunoreactivity (Gazzaley et al., 1996). Finally, a number of neurological disorders are associated with dysregulation of both NMDAR function and endogenous Ca 2+ buffer synthesis (Heizmann and Braun, 1992;Paoletti et al., 2013;Kook et al., 2014). 
It has been suggested that PV alterations in schizophrenia can consequently lead to hypofunction of NMDARs. Since schizophrenia is often attributed to NMDAR hypofunction, this might reflect dysregulation of the receptor rather than a deficit in the number of NMDARs (Kantrowitz and Javitt, 2010; Gonzalez-Burgos and Lewis, 2012). In addition, variation in the extracellular Ca2+ concentration under certain conditions may also attenuate the impact of CIIN. For example, in vivo measurements of the extracellular Ca2+ concentration in primates during seizures have shown that the Ca2+ level drops into the 100 µM range (Pumain et al., 1985). In this case, reduced CIIN may widen the window for synaptic integration owing to the prolongation of the NMDAR-mediated response and lead to neuronal overexcitability. In conclusion, our findings suggest that Ca2+-induced inactivation of NMDARs operating on the time scale of EPSCs may contribute to the cell-specific fine tuning of excitatory synaptic transmission under normal and pathological conditions. AUTHOR CONTRIBUTIONS Study conception and design: AD, NB and AR. Acquisition of the data: FV, YZ and AR. Analysis and interpretation of the data: FV and AR. Analysis of the data: MM. Drafting of manuscript: AD, NB and AR. Critical revision: NB and AR. ACKNOWLEDGMENTS This work was supported by the program of competitive growth of Kazan University, the subsidy allocated for the state assignment in the sphere of scientific activities, the RFBR grant (14-04-01457), the Bundesministerium für Bildung und Forschung (Bernstein Center for Computational Neurosciences 01GQ1003A) and by the A*MIDEX project (no. A*M-AAP-TR-14-02-140522-13.02-BURNASHEV-HLS) funded by the "Investissements d'Avenir" French Government program, managed by the French National Research Agency (ANR). We thank David Jappy for useful comments on the manuscript.
v3-fos-license
2019-04-16T19:40:23.687Z
2019-04-03T00:00:00.000
115144439
{ "extfieldsofstudy": [ "Environmental Science", "Computer Science" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-4292/11/7/803/pdf?version=1554806845", "pdf_hash": "db3d6a230c9f4fe42a8fc3e26ffff125e18c98da", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:105", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "715feca4f0188f6921e81d78b36b1f4a92f5c9ca", "year": 2019 }
pes2o/s2orc
Modeling and quantitative analysis of tropospheric impact on inclined geosynchronous SAR imaging : Geosynchronous orbit synthetic aperture radar (GEO SAR) has a long integration time and a large imaging scene. Therefore, various nonideal factors are easily accumulated, introducing phase errors and degrading the imaging quality. Within the long integration time, tropospheric status changes with time and space, which will result in image shifts and defocusing. According to the characteristics of GEO SAR, the modeling, and quantitative analysis of background troposphere and turbulence are conducted. For background troposphere, the accurate GEO SAR signal spectrum, which takes into account the time-varying troposphere, is deduced. The influences of different rates of changing (ROC) of troposphere with time are analyzed. Finally, results are verified using the refractive index profile data from Fengyun (FY) 3C satellite and the tropospheric zenith delays data from international GNSS service (IGS). The time–space changes of troposphere can cause image shifts which only depend on the satellite beam-foot velocity and the linear ROC of troposphere. The image defocusing is related to the wavelength, resolution requirement, and the second and higher orders of ROC. The short-wavelength GEO SAR systems are more susceptible to impacts, while L-band GEO SAR will be affected when the integration time becomes longer. Tropospheric turbulence will cause the amplitude and phase random fluctuations resulting in image defocusing. However, in the natural environment, radio waves are very weakly affected by turbulence, and the medium-inclined GEO SAR of L- to C-band will not be affected, while the X-band will be influenced slightly. Introduction Troposphere is nondispersive and it affects the amplitude and phase of the radio waves passing through it.It can be divided into two parts: the background troposphere and the turbulence.The background troposphere mainly refers to the slowly changing part due to the large-scale component and corresponds to the input region [1].Radio wave propagation in the troposphere can be characterized by refractive index.When the signal passes through the troposphere, the propagation velocity slows down because the refractive index is greater than 1, which introduces delay errors.Generally, different atmospheric conditions can cause different delay errors.Besides, because the meteorological elements such as atmospheric temperature, pressure, and humidity change with the height and spatial distribution of the refractive index, it is inhomogeneous, causing the propagation path to bend and introducing the bending errors.The tropospheric turbulence refers to the dramatic on ionospheric effects on the L-band system, including ionospheric modeling [36], quantitative analysis [37], compensation algorithms [38], and experimental verification [39].The troposphere causes less influence in L-band and the relevant research is relatively less.However, the impact of the troposphere has also become more serious with the increase of operating frequency. 
Considering the characteristics of GEO SAR, this paper completes the modeling of the background troposphere using the polynomial expansion of delay errors against azimuth time, and turbulence considering a modified Kolmogorov power law spectrum and phase screen theory.The tropospheric influences are quantitatively analyzed using the simulated and measured data.The accurate GEO SAR signal spectrum considering the time-varying troposphere is derived and the influence of different tropospheric rates of changing with time (ROC) is analyzed, along with the thresholds of tropospheric errors causing image shifts and defocusing.Since the impacts depend not only on the tropospheric status, but also on the GEO SAR system parameters, in this paper, the effects of different GEO SAR orbital configurations are comparatively analyzed.In addition, influences of different integration times and wavelengths are also compared and summarized.These results are verified with the refractive index profile data from Fengyun (FY) 3C satellite and tropospheric zenith delay data from international GNSS service (IGS).As for the tropospheric turbulence, the random amplitude and phase errors caused by turbulence are analyzed based on the theory of phase screen, which is verified by simulation; its influences through the spectrum analysis method, which can build the relationship between the imaging performance indicators (e.g., PSLR and ISLR).Finally, taking the real natural turbulence status into account, we conclude that there is no effect of the turbulence on the medium inclination and high-inclination GEO SAR focusing except the slight defocusing for X-band and even shorter wavelength systems. The structure of this paper is as follows.In Section 2, the phase errors introduced by troposphere are modeled and analyzed.Next, the GEO SAR signal affected by troposphere is proposed and the tropospheric effects are discussed in Section 3. In Section 4, some simulations and measured data of troposphere are used to verify our analysis.Finally, conclusions are drawn in Section 5. The Radio Refractivity Background troposphere will introduce delay errors and bending errors, mainly caused by the change of refractive index with height.Therefore, the influence of the troposphere on radio wave propagation is usually expressed by the refractive index n, which is between 1.00026 and 1.00046.For convenience, the radio refractivity N is used in the paper: N = (n − 1) × 10 6 (1) Radio refractivity N is categorized into dry item N d and wet item N w [40].N can be expressed as where T, P and e w represent the temperature, the pressure, and the humidity, respectively, of the atmosphere at different heights.It is noted that the electrons effects are not considered here and the refractivity is not affected by the ionosphere because we only study the tropospheric effects in this paper. Modeling of Propagation Errors The GEO SAR geometry is shown in Figure 1, where O is the geocentric center and R is the Earth's radius.Target P is at a height of h p from ground, and the curve from GEO SAR passing through the point P and P to target P represents the actual propagation path which passes through the heterogeneous troposphere.The straight line from GEO SAR to P is the straight path of the signal.The point P is the intersection of the GEO SAR signal propagation path and the tropopause.The point P represents any point on the actual path of the signal.θ 1 is the elevation angle at the target P. 
θ2 is the elevation angle at any point P′ on the signal propagation path. hP′ is the height of P′ and hup is the height of the tropopause. Ray tracing methods [41,42] can be used to calculate the propagation errors in the troposphere. According to the geometric relation of Figure 1, the actual propagation distance of the GEO SAR signal in the troposphere is given by (3), where rp and rP′′ are respectively the distances from P and P′′ to O, rp = hp + R and rP′′ = hup + R. When the GEO SAR signal passes through the troposphere, the error caused by the propagation path is given by (4), which separates into a delay error term and a bending error term, where Rstr represents the path length when the GEO SAR signal propagates straightly in the ideal case. The first term on the right side of the equation represents the delay error caused by the slowing down of the signal propagating velocity and the second term represents the bending error due to the bending of the signal propagation path. It can be seen that the troposphere is a nondispersive medium and the resulting signal delay is independent of wavelength. According to the analysis of massive Global Positioning System (GPS) data, the total tropospheric error can reach meters, but the proportion of bending error is very small, generally no more than 0.1 m [43]. Moreover, the change in this curved path contribution as a function of refractive index change due to variation in its wet part is usually negligible. So the bending errors can be treated as being constant.
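As a numerical illustration of the delay term in (4), the sketch below integrates a refractivity profile along a straight slant path, anticipating the delay-only approximation adopted in the following paragraph. The exponential N(h) profile and the flat-geometry 1/sin(theta) mapping are simplifying assumptions made here for illustration, not values taken from the paper.

```python
import numpy as np

def slant_delay_m(n0=320.0, scale_height_m=7000.0, h_top_m=10_000.0,
                  elevation_deg=45.0, n_steps=10_000):
    """Tropospheric excess path delay (m) from the refractivity integral.

    Delta_R ~ 1e-6 * integral of N(h) ds, with ds = dh / sin(elevation)
    for a flat-earth, straight-ray approximation (bending neglected).
    N(h) is modelled here as an exponential profile N0 * exp(-h/H).
    """
    h = np.linspace(0.0, h_top_m, n_steps)
    refractivity = n0 * np.exp(-h / scale_height_m)     # N-units
    zenith_delay = 1e-6 * np.trapz(refractivity, h)     # metres, zenith direction
    return zenith_delay / np.sin(np.radians(elevation_deg))

for elev in (90.0, 45.0, 20.0):
    print(f"elevation {elev:4.1f} deg : delay ~ {slant_delay_m(elevation_deg=elev):.2f} m")
```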
Therefore, in the following analysis, we neglect the effects of bending errors, and only consider the delay error, that is, the elevation angle θ is assumed to remain unchanged in the integration path.At this time, the tropospheric propagation error can be simplified as It can be seen that the delay error introduced by the troposphere in GEO SAR is mainly determined by the integral of the refractivity along the propagation path.At this point, the tropospheric phase error introduced into GEO SAR is which can be calculated from the tropospheric refractivity parameters and the GEO SAR signal propagating geometry.Actually, during the long aperture time and within large observation swath of GEO SAR, the phase error ∆φ trop will change, mainly including (1) during the synthetic aperture time, the propagation path of the signal in the troposphere changes.The length of the propagation path corresponding to different PRT moments is different, introducing different delay errors.(2) Due to the long synthetic aperture time, the tropospheric state may change with time, resulting in time-varying delay errors.(3) Due to the large swath, the refractive index of different propagation paths vary inhomogeneously during the signal passing through the troposphere, causing that the refractive index along different propagation paths are different, resulting in different delay errors. The time-varying, the gradient change of the spatial distribution of troposphere, and the change of the signal propagation path all cause the delay errors.However, all these phase errors will appear as a time-varying pattern in GEO SAR signals from pulse to pulse, but differ from various positions of target.Therefore, in this paper, these three effect types all can be modeled as a series expansion form with slow time, and used to establish the analytical expression and quantitative analysis of the influence of troposphere on imaging and give the threshold under the different parameters of GEO SAR systems.But the effects of the three categories were not separately analyzed and compared.The comparison of these three kinds of influences will be studied in future work.The phase errors can be expressed as where q i is the ith temporal ROC in the error and P denotes the different locations. Power Spectrum Model of Tropospheric Turbulence Tropospheric turbulence will cause the random fluctuations of refractivity.A common model describing turbulence is atmospheric general circulation model (GCM) [44], which is an atmospheric dynamics model that simulates global and large area climate change processes.It is used for weather forecasting, understanding the climate, and forecasting climate change.It may be not suitable for our research because we only study the tropospheric effects on GEO SAR for specific short period of time and relatively small scale, i.e., the synthetic aperture time and length.So in our paper, we choose the power spectrum density (PSD) obeying the power law distribution [45]: where κ = κ x 2 + κ y 2 + κ z 2 (rad/m) is the spatial wave number; κ x = 2π/x, κ y = 2π/y, κ z = 2π/z, l 0 is inner scale, L 0 is outer scale; κ m = 5.91/l 0 , κ 0 = 2π/L 0 , C 2 n m −2/3 is the tropospheric refractivity structure constant which can express the turbulence intensity. 
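The power-law spectrum quoted above, with kappa_m = 5.91/l0 and kappa_0 = 2*pi/L0, is usually written in the Kolmogorov-von Karman form 0.033*Cn^2*exp(-kappa^2/kappa_m^2)*(kappa^2 + kappa_0^2)^(-11/6); since the equation body is not reproduced in this extraction, that conventional form, the 0.033 constant, and the example Cn^2 value below are assumptions. The sketch evaluates the spectrum and integrates it over the inertial range, in the spirit of the turbulence-energy relation between sigma_n^2, the shape factor G, and Cn^2 discussed in the next subsection (one common 3-D isotropic convention is used for the integral).

```python
import numpy as np

def von_karman_psd(kappa, cn2, l0=0.05, outer_scale=100.0):
    """Kolmogorov-von Karman refractivity spectrum (conventional 0.033*Cn^2 form)."""
    kappa_m = 5.91 / l0
    kappa_0 = 2.0 * np.pi / outer_scale
    return 0.033 * cn2 * np.exp(-(kappa / kappa_m) ** 2) / (kappa**2 + kappa_0**2) ** (11.0 / 6.0)

cn2 = 1e-14                       # example structure constant in m^(-2/3) (assumed)
l0, L0 = 0.05, 100.0              # inner and outer scales (m)
kappa = np.logspace(np.log10(2 * np.pi / L0), np.log10(2 * np.pi / l0), 4000)

psd = von_karman_psd(kappa, cn2, l0, L0)

# 3-D isotropic integration over the inertial range: sigma_n^2 = 4*pi*int kappa^2 Phi dkappa
sigma_n2 = 4.0 * np.pi * np.trapz(kappa**2 * psd, kappa)
g_factor = sigma_n2 / cn2         # shape factor G relating Cn^2 and sigma_n^2
print(f"sigma_n^2 ~ {sigma_n2:.3e},  G ~ {g_factor:.3e} (shape-dependent)")
```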
Compared to the Kolmogorov spectrum [46] that only applies to the inertial zone and to the Tatarskii spectrum [47] that applies to the inertial and dissipation zone, the Kolmogorov-von Karman spectrum can be used to describe the distribution of the tropospheric turbulence in the entire wave number domain [48].Besides, the modified turbulence power spectrum is proposed which can describe the PSD of turbulence in all wave number domains: where κ l = 3.3/l 0 .However, the turbulence is not static.There exists a movement of turbulence along with wind which will cause the temporal variation on turbulence PSD.Similar to the analysis from Pratsiraola et al. [49], we analyze the time-varying characteristics by considering the drift velocity in the phase screen model.Firstly, starting from the autocorrelation function of refractivity and considering the drift velocity of turbulence, the PSD model affected by the drift velocity is obtained. Assuming that the drift velocity is v d and the tropospheric penetrate point velocity is v p , then the status of the tropospheric irregularity located at x at time t a after the t a time corresponds to the status of the tropospheric irregularity at time t a located at x − v d t a .This relationship can be expressed as refractivity autocorrelation function B n (x, t a ): Therefore, considering the drift velocity, the autocorrelation function can be modified as where β = v e f f /v p is the velocity scale conversion rate and v e f f = v p − v d is relative velocity.According to Wiener-Sinqin's theorem, the refractivity autocorrelation function and its PSD are an Fourier transform pair (i.e., B n (x) ↔ F Φ n (κ)).Therefore, the turbulence PSD considering time-varying can be obtained by scaling the original PSD: It can be seen that β will have effect on amplitude and cutoff frequency of PSD. Turbulence Energy The turbulence level in the troposphere is determined by the turbulence energy which is expressed as the variance of the refractive index σ n 2 .It can be obtained by integrating the refractivity structure spectrum in the inertial region.Taking (9) as an example, σ n 2 can be expressed as where κ 0 = 2π/L 0 and κ m = 2π/l 0 . Here we can define the factor G which represents the integral of the normalized shape of the PSD as The value of G depends on the shape of the selected power spectrum.When different power spectra are chosen, G is different.So the relationship between C 2 n and σ n 2 can be expressed as .3. Multiple Phase Screen Model The amplitude and phase fluctuations caused by the tropospheric turbulence can be simulated using the phase screen theory similarly to the ionospheric scintillation.In this theory, the phase of the signal will be disturbed randomly when it traverses the turbulence (i.e., modeled as thin phase screens).Then the signal propagates in the free space after passing through the phase screen, the disturbing phase makes the wave fronts of the signal interfere with each other, causing the amplitude and phase fluctuations. 
The ionospheric scintillation can be modeled as a thin screen at a height of ~350 km above the ground.The signal passes through the phase screen and propagates in the free space.The troposphere is different.The troposphere distributes from the ground to the height of ~10 km and there is no part of the signal that propagates in free space.The intensity of tropospheric turbulence (includes vortices caused by convection or wind shear) is related to altitude.It will reach maximum as it approaches the ground.However, if we divide the entire troposphere into multiple phase screens along the vertical height, the thinner the thickness of each subphase screen and the greater the total number of phase screens, the closer to the actual tropospheric distribution. In this paper, we employ a multiphase screen theory to model turbulence as multiple thin screens, integrating the energy of each layer separately onto different thin screens.For simplicity, here we only consider the spatial coherence accumulation of each layer, regardless of the coherence between layers. The disturbing phase introduced by the tropospheric turbulence can be described by the power spectrum Φ tro (κ).Assuming that the thickness of each layer is ∆d, the relationship of the phase power spectrum of the ith layer and the 2D power spectrum of the refractive index can be expressed as [50] where k = 2π/λ.If h 0 is the total thickness of turbulence, we can divide the turbulence into M = h 0 /∆d layers.Equation (16) shows that the turbulence energy of each layer with a thickness of ∆d is integrated together to form a screen and radio waves continue to propagate ∆d in free space after passing it.So, phase screen theory can be used to analyze the impact of turbulence for each layer. In the simulation, firstly, the PSD function Φ i tro (κ) is used to construct the phase random fluctuations: where r m is the zero mean and unit variance Hermitian complex Gaussian random variable. The signal propagating in turbulence can be modeled by parabolic wave function and solved through the multiple phase screen theory [51,52] where u is horizontal space position.Equation ( 18) is obtained using parabolic equation approximation [53].It is noted that the Rytov approximation is also a theory to solve the random turbulence and the amplitude and phase fluctuations can be obtained by Rytov transform, which is seemly similar with (18).But the Rytov approximation can only solve the weak fluctuation problem, which is the limitation compared with the parabolic equation approximation.Therefore, in order to analyze the effects of turbulent strength on GEO SAR imaging in the subsequent content, we choose the parabolic equation approximation for phase screen theory in our paper.Therefore, the total tropospheric transfer function is As the GEO SAR orbit height is ~36,000 km, the heights of the ionosphere and troposphere relative to GEO SAR orbit are not much different.Therefore, the ionospheric transfer function (ITF) of ionospheric scintillation and tropospheric turbulence have similar pattern in GEO SAR cases. 
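The screen-generation step in (17) and the split-step propagation in (18)-(19) can be prototyped as follows: a random phase screen is synthesized by shaping complex Gaussian noise with the square root of a phase PSD (up to a normalisation constant), and the field is alternately multiplied by the screen and propagated over a layer thickness with a parabolic (angular-spectrum) propagator. All numbers below, including the toy phase PSD, are illustrative assumptions rather than the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Transverse grid (1-D screens for simplicity; illustrative values)
n_pts, dx = 4096, 0.5                         # samples, metres
kx = 2.0 * np.pi * np.fft.fftfreq(n_pts, d=dx)

wavelength = 0.24                             # m (assumed L-band)
k0 = 2.0 * np.pi / wavelength
delta_d = 200.0                               # layer thickness (m)
n_layers = 5

def phase_screen(phase_psd):
    """Random screen: noise shaped by sqrt(PSD) and inverse-FFT'd, Eq. (17)-style."""
    noise = rng.standard_normal(n_pts) + 1j * rng.standard_normal(n_pts)
    spectrum = noise * np.sqrt(phase_psd / (n_pts * dx))
    return np.real(np.fft.ifft(spectrum)) * n_pts

# Toy phase PSD: power law in |kx| with an outer-scale cutoff (assumed shape)
kappa_0 = 2.0 * np.pi / 100.0
phase_psd = 1e-4 / (kx**2 + kappa_0**2) ** (11.0 / 6.0)

field = np.ones(n_pts, dtype=complex)         # unit plane wave entering the turbulence
for _ in range(n_layers):
    field *= np.exp(1j * phase_screen(phase_psd))                    # thin-screen phase kick
    # free-space (parabolic) propagation over delta_d in the spectral domain
    field = np.fft.ifft(np.fft.fft(field) * np.exp(-1j * kx**2 * delta_d / (2.0 * k0)))

amp_fluct = np.abs(field) / np.abs(field).mean() - 1.0
print(f"normalised amplitude std (A_NV-like) ~ {amp_fluct.std():.3f}")
print(f"phase std (P_NV-like) ~ {np.angle(field).std():.3f} rad")
```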
GEO SAR Signal Modeling and Tropospheric Effect Analysis According to the above analysis in Section 2, the phase errors introduced by the troposphere to the signal passing through it can be expressed as ∆φ atm (P, t a ) = ∆φ bg (P, t a ) + ∆φ turb (P, t a ) (20) where P is the target in the different position and t a is the azimuth slow time.∆φ bg is the phase error introduced by the background troposphere, as shown in (7); ∆φ turb is the random phase error introduced by tropospheric turbulence, as shown in (17). Because the troposphere is a nondispersive medium, the effects of different frequency components are the same.Taking the background troposphere and turbulence into account, the accurate echo signals of the GEO SAR can be expressed as where t a is the fast time, A r (•) and A a (•) are the envelope function in range and azimuth, respectively, k r is the range frequency modulation rate, λ is wavelength, t a is azimuthal slow time, and I TF (t a ) and φ TF (t a ) are amplitude and phase errors introduced by turbulence, respectively. Theoretical Analysis The troposphere is a nondispersive medium that has the same effect on different frequency signals and it cannot affect the imaging in range.Here we only consider GEO SAR azimuth signal influenced by troposphere.Time-varying tropospheric status and different propagation path's lengths between the different pulses lead to different delay errors which will affect azimuth imaging.These influences are modeled as a series expansion form varying with slow time.The GEO SAR azimuth signal considering background troposphere is analyzed here which can be written as where t a is azimuth slow time, T a is the integration time, f dr is azimuth frequency modulation rate, λ is wavelength, and q i is the ith order rate of change of tropospheric delay error.Through the series inversion theory and the Fourier transform method [54], the derived azimuth signal spectrum is where f a = f dr •t a is azimuth frequency, A exp jπ f 2 a / f dr is the GEO SAR frequency-domain signal that is not affected by the troposphere, and φ ai is the phase error caused by q i .The delay introduced by φ a1 is τ 1 = φ a1 /2π f a = 2•q 1 /λ f dr , so the azimuth image offset can be written as [1,22,55] where v b f is the beam-foot velocity, which is defined as the speed of the radar beam center on the ground.Here, v b f is employed because GEO SAR operates in 'pseudo-spotlight' mode [56] which is caused by the ultrahigh orbit height and Earth rotation.It is noted that the beam-foot velocity and motion velocity are not approximately equal for GEO SAR due to the high-orbital characteristics, which are different from the LEO SAR and airborne SAR.Since λ is inversely proportional to f dr , the azimuthal offset is only related to q 1 when the acquisition geometry of GEO SAR or v b f / f dr is fixed.Therefore, the azimuth shift does not depend on wavelength for GEO SAR. The quadratic phase error of azimuth φ a2 will cause the main lobe widening and sidelobes increasing.Taking the relationship of f a , f dr and T a into account, substituting f a = f dr •t a into φ a2 = 4π•q 2 f a 2 / λ f 2 dr and considering the largest error at edge of the aperture (i.e., t a = T a /2), the maximum second-order phase error of tropospheric delay can be obtained as It can be seen that φ a2m depends on T a , λ and q 2 . 
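The image-quality bookkeeping implied by the offset formula and by the maximum quadratic phase error above (and the cubic term treated in the next paragraph) can be sketched directly in the time domain: the linear ROC q1 gives the azimuth shift v_bf * 2*q1/(lambda*f_dr), while the q2 and q3 terms of the two-way phase error evaluated at the aperture edge t_a = T_a/2 are compared with the usual pi/4 and pi/8 focusing thresholds. Those threshold values, the coefficient reconstruction, and all example numbers below are conventional assumptions, not quantities taken from the paper's equations or tables.

```python
import numpy as np

def troposphere_azimuth_effects(q1, q2, q3, wavelength, t_int, f_dr, v_bf):
    """Azimuth shift and edge-of-aperture phase errors from the slow-time ROC expansion."""
    shift = v_bf * 2.0 * q1 / (wavelength * f_dr)        # image offset in metres
    t_edge = t_int / 2.0
    qpe = 4.0 * np.pi * q2 * t_edge**2 / wavelength      # quadratic phase error at aperture edge
    cpe = 4.0 * np.pi * q3 * t_edge**3 / wavelength      # cubic phase error at aperture edge
    return shift, qpe, cpe

# Illustrative values only
q1, q2, q3 = 1e-5, 1e-7, 1e-10            # m/s, m/s^2, m/s^3
v_bf, slant_range = 60.0, 3.8e7           # beam-foot velocity (m/s), slant range (m)
for band, lam, t_int in (("L", 0.24, 1500.0), ("X", 0.03, 200.0)):
    f_dr = 2.0 * v_bf**2 / (lam * slant_range)           # rough azimuth FM rate for a fixed geometry
    shift, qpe, cpe = troposphere_azimuth_effects(q1, q2, q3, lam, t_int, f_dr, v_bf)
    print(f"{band}-band: shift ~ {shift:5.1f} m, "
          f"|QPE| {abs(qpe):.2f} rad (pi/4 = {np.pi/4:.2f}), "
          f"|CPE| {abs(cpe):.2f} rad (pi/8 = {np.pi/8:.2f})")
```

Because lambda*f_dr is fixed for a given geometry in this rough model, the printed shift is the same for both bands, which is consistent with the wavelength independence of the offset noted above.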
The azimuthal third-order phase error φ a3 produces the asymmetric sidelobes and may cause azimuthal defocusing.Similarly, the maximum of φ a3 can be expressed as It can be seen that φ a3m depends on T a , λ and q 3 . Analysis and Discussion on Impacts of Different GEO SAR Configurations From the theoretical analysis in the previous section, the effects of the troposphere are not only related to the changes of the troposphere but also the GEO SAR system parameters (i.e., the configuration of the GEO SAR such as high inclination, low inclination, and near-zero inclination).The image shift caused by the troposphere mainly depends on the linear ROC of the troposphere.The tropospheric linear ROC is related to not only the status of the troposphere but also the propagation path.When the GEO SAR operates at a large squint angle or a large look angle, the ROC of the propagation path increases and the tropospheric impact is more serious.At this time, the linear ROC of the troposphere also increases and the image shift becomes more serious too.Besides, the look angles and the squint angles corresponding to the different targets in the scene are also different, resulting in the different offsets of different pixels in the image and causing image distortion. According to the relationship between integration time, frequency modulation rate, and azimuth resolution, the maximum second-order phase error relating to the azimuth resolution can be obtained by substituting (23).Equation ( 25) can be written as where ρ a is azimuthal resolution and v b f is the beam-foot velocity.When the geometric configuration and q 2 are fixed, the higher the resolution is, the more serious the quadratic phase error will be. When the wavelength and q 2 are fixed and the orbit configuration is unfixed, φ a2m is related to v b f and f dr (and f dr ∝ v 2 b f ).Therefore, the smaller v b f is, the larger the quadratic phase error and the serious defocus will be.Generally, the smaller the orbital inclination is, the smaller v b f will be and the more serious defocus will be.For the same orbital configuration (except the near-zero inclination), the perigee or apogee v b f is the smallest, while the velocity is the largest near the equator.As a result, the levels of deterioration of different orbital positions are not same. When only considering the impact of T a and the fixed size antenna, the shorter the wavelength is, the smaller the integration time is because of T a ∝ λ.Therefore, assuming the geometrical configurations are same, φ a2 is proportional to the integration time. When only considering the impact of λ, (27) can be written as where R is the slant range of zero-Doppler and B a = f dr •T a is the azimuthal bandwidth.When the resolution is fixed, the larger the wavelength is, the more serious the impact will be.This can be also explained that much greater integration time is needed for longer wavelength when the resolution is fixed.The third-order phase error introduced by the troposphere is only related to the integration time.The longer the integration time is, the more serious the impacts will be.However, for different configurations of GEO SAR, the small inclination GEO SAR needs longer integration time to achieve a certain resolution.Therefore, under the same resolution requirement, the smaller the orbital inclination is, the severer the tropospheric effect will be. 
Tropospheric Turbulence Effect Analysis GEO SAR azimuthal signal affected by turbulence can be written as In order to investigate the degree of the fluctuation, A NV is defined as the normalized amplitude standard deviation of D TF , which describes the amplitude fluctuation strength; P NV is the phase standard deviation of D TF , which describes the phase fluctuation strength: As A NV and P NV become greater, the turbulence will be more serious. Background Troposphere In this section, we will mainly use measured data (refractive index profile data from FY-3C and tropospheric zenith delay data from IGS), which changes slowly with time to complete the analysis of impacts on GEO SAR imaging.For the IGS data, the slant delay can be mapped from troposphere zenith delay data by mapping function to analyze the background tropospheric effects [24].It is verified that these two methods can get the almost same conclusions because the first and second order rate of change (ROC) of tropospheric slant path delay is the same level, as shown in Table 1. Table 1.Tropospheric delay of each order of time ROC in GEO SAR signal. ROC ∆r 0 (m) q 1 (m/s) q 2 (m/s 2 ) q 3 (m/s The atmospheric refractive index profile data was acquired from the FY-3C satellite [57], released by China National Satellite Meteorological Center.The time interval is usually 2 to 5 min, including atmospheric refraction index, data time (year/month/day/hour/minute/second) and satellite position coordinates.The data from 18:28 to 18:40 on May 27, 2015 are selected for analysis and the data interval is 2 min.There are six sets of data in 10 min.Using the ray tracing method, the signal delay corresponding to the six sets of refractive index data is obtained, as shown by the red "+" in Figure 2. We calculate the amount of tropospheric delay per second by Lagrange interpolation [58,59], as shown in Figure 2a.Similarly, we also get the 12 min troposphere zenith path delay data from IGS BJFS site (Beijing) from 18:28 to 18:40 on May 27, 2015 [60], where the data interval is 5 min.The slant path delay can be obtained by mapping function as shown in Figure 2b. Since FY-3C is a LEO satellite, the signal delay here is not fully equivalent to the effects of the troposphere on the GEO SAR signal.Therefore, equivalent treatment [39] based on the GEO SAR and FY-3C satellite orbital parameters is required to calculate the tropospheric delay data on the GEO SAR signal propagation path.Every order ROCs can be obtained as shown in Table 1.We can find the first and second order ROC of FY-3C satellite and IGS are at same level.In the following, we mainly used FY-3C satellite data for more detailed analysis. interval is 2 min.There are six sets of data in 10 min.Using the ray tracing method, the signal delay corresponding to the six sets of refractive index data is obtained, as shown by the red "+" in Figure 2. 
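The densification step described above (a handful of FY-3C-derived delay samples at a 2-minute spacing interpolated to a 1-second grid before extracting the ROCs of Table 1) can be reproduced with a standard Lagrange-type (barycentric) interpolator followed by a polynomial fit. The sample values below are placeholders, not the FY-3C measurements.

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Six slant-delay samples at a 2-minute spacing (placeholder values, metres)
t_samples = np.arange(6) * 120.0                          # s
delay_samples = np.array([2.412, 2.415, 2.419, 2.421, 2.426, 2.431])

interp = BarycentricInterpolator(t_samples, delay_samples)   # Lagrange-type interpolation
t_dense = np.arange(0.0, 600.0 + 1.0, 1.0)                   # 1-s grid over 10 min
delay_dense = interp(t_dense)

# Rates of change (ROC) q1..q3 from a cubic fit, as listed in Table 1
q3, q2, q1, r0 = np.polyfit(t_dense, delay_dense, deg=3)
print(f"r0 = {r0:.3f} m, q1 = {q1:.2e} m/s, q2 = {q2:.2e} m/s^2, q3 = {q3:.2e} m/s^3")
```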
It is noted that the measured data are not representative of the atmospheric status over China as a whole. The main work of this paper is to establish a GEO SAR signal model considering the influence of the troposphere and to analyze that influence theoretically. The measured data here are only used to verify the correctness of the tropospheric model, and are not employed to draw any conclusion about tropospheric effects in China or in a particular region based on a large number of measurements. The effects of the background troposphere on GEO SAR imaging are related to the integration time, except for the azimuth shift. Although the troposphere is a nondispersive medium and does not affect imaging in slant range, the phase errors of GEO SAR at different wavelengths differ in azimuth imaging. The phase delay errors introduced by the troposphere in different bands can be calculated from the FY-3C atmospheric refractive index profile data, and qi is the same for the different bands since the troposphere is a nondispersive medium. qi can be obtained by interpolating and fitting the raw refractive index profile data. According to the parameters in Tables 1 and 2, the azimuth signals in different bands (L, S, C, and X) and for different integration times can be determined, and the peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR) can be obtained after pulse compression processing. The azimuthal PSLR and ISLR of the L, S, C, and X bands are simulated based on (22). The evaluation results are shown in Figure 3. Tropospheric errors can cause image defocusing for long integration times. The smaller the wavelength is, the greater the impact will be. The changes of the troposphere can also result in azimuthal image shifts that are independent of the wavelength and the integration time; instead, they depend only on the linear ROC of the troposphere.
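The evaluation route just described (an azimuth chirp contaminated by the tropospheric phase error, pulse compression, then PSLR/ISLR measurement) can be prototyped as below. The chirp parameters, the q_i values, and the crude main-lobe and ISLR definitions are assumptions for illustration only, not the paper's simulation settings.

```python
import numpy as np

def pslr_islr(compressed):
    """Peak and integrated sidelobe ratios (dB) of a compressed azimuth response."""
    power = np.abs(compressed) ** 2
    peak = int(np.argmax(power))
    # crude main-lobe bounds: first local minima on either side of the peak
    left = peak
    while left > 1 and power[left - 1] < power[left]:
        left -= 1
    right = peak
    while right < power.size - 2 and power[right + 1] < power[right]:
        right += 1
    main = power[left:right + 1]
    side = np.concatenate((power[:left], power[right + 1:]))
    pslr = 10.0 * np.log10(side.max() / power[peak])
    islr = 10.0 * np.log10(side.sum() / main.sum())
    return pslr, islr

# Azimuth chirp with a tropospheric phase error (illustrative numbers)
wavelength, f_dr, t_int, prf = 0.24, 8.0e-4, 1500.0, 20.0
t_a = np.arange(-t_int / 2, t_int / 2, 1.0 / prf)
q1, q2, q3 = 1e-5, 1e-7, 1e-10
trop_phase = -4.0 * np.pi / wavelength * (q1 * t_a + q2 * t_a**2 + q3 * t_a**3)
signal = np.exp(-1j * np.pi * f_dr * t_a**2 + 1j * trop_phase)
reference = np.exp(-1j * np.pi * f_dr * t_a**2)

# Matched filtering (pulse compression) via FFT-based correlation
n_fft = 4 * t_a.size
compressed = np.fft.fftshift(
    np.fft.ifft(np.fft.fft(signal, n_fft) * np.conj(np.fft.fft(reference, n_fft))))
pslr, islr = pslr_islr(compressed)
print(f"PSLR = {pslr:.1f} dB, ISLR = {islr:.1f} dB")
```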
However, for the same integration time, GEO SAR systems with different wavelengths reach different resolutions: the smaller the wavelength is, the higher the resolution will be. The following analysis therefore compares the tropospheric effects for different geometric configurations and wavelengths of GEO SAR at the same resolution. Table 3 shows the assessment of GEO SAR point-target imaging at different orbital positions for the L-band and X-band with a low-inclination orbit and a high-inclination orbit. The resolution is set to 10 m (the other parameters are shown in Table 2). The image offset caused by the troposphere is only related to the geometric configuration, not to the wavelength. However, due to the short wavelength of the X-band GEO SAR, less time is required to reach the same resolution of 10 m. Therefore, the X-band GEO SAR is less affected by the troposphere when the same resolution is required, and defocusing occurs only in the case of GEO SAR with low inclination. Under the same geometric configuration, the integration time of the L-band system is nearly 2000 s, and the azimuth will be defocused due to the tropospheric influences. The point-target azimuthal envelopes of the L-band and X-band systems in this case are shown in Figure 4. Simulations of Turbulent Energy In this section, the effects of turbulence on GEO SAR imaging are analyzed by evaluating the azimuthal PSLR and ISLR. The amplitude and phase errors caused by turbulence are weak in nature and coupled with the random errors of the system, making them difficult to extract and reproduce accurately. However, this random process can be described by its spatial PSD and its energy. The effects of turbulence on imaging can therefore be obtained by semiphysical simulation based on the turbulent energy and the PSD shape.
Simulations of Turbulent Energy

In this section, the effects of turbulence on GEO SAR imaging are analyzed by evaluating the azimuthal PSLR and ISLR. The amplitude and phase errors caused by turbulence are weak in nature and coupled with the random errors of the system, making them difficult to extract and reproduce accurately. However, this random process can be described by its spatial PSD and its energy. The effects of turbulence on imaging can therefore be obtained by semiphysical simulation based on the turbulent energy and the PSD shape.

Firstly, we simulate the PSDs mentioned in Section 2 to choose an appropriate PSD for our analysis. Assuming that v_d = 10 m/s, v_p = 300 m/s, the surface temperature is 20.85 °C, the ground relative humidity is 76.8%, the inner scale l_0 is 5 cm, the outer scale L_0 is 100 m, and the thickness of the turbulence is 500 m, the distributions of the four aforementioned PSDs are shown in Figure 5. The modified turbulence PSD has a clear input zone, inertial zone, and dissipative zone, which is more in line with the actually observed turbulence distribution. Different regions of the PSD represent different states of the troposphere [1]. The background troposphere mainly refers to the slowly changing part due to large-scale variation and corresponds to the input region. The tropospheric turbulence refers to the rapidly changing part due to small-scale vortices and corresponds to the inertial region. Therefore, the modified PSD given by (14) is used in the following analysis, and only the inertial zone is considered.
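Since the exact form of the modified PSD in (14) is not reproduced here, the following sketch evaluates a standard modified von Karman refractive-index spectrum with an inner and an outer scale as a stand-in. It shows the same qualitative input/inertial/dissipation structure discussed above; the constants follow the usual von Karman form and are not taken from the paper.

```python
import numpy as np

def von_karman_psd(kappa, cn2, l0=0.05, L0=100.0):
    """Modified von Karman refractive-index spectrum, used here only as a
    stand-in for the modified PSD of Equation (14).

    kappa : spatial wavenumber (rad/m)
    cn2   : refractive-index structure constant C_n^2 (m^(-2/3))
    l0    : inner scale (m)  -> controls the dissipation range
    L0    : outer scale (m)  -> controls the input (energy-containing) range
    """
    km = 5.92 / l0                 # inner-scale wavenumber
    k0 = 2.0 * np.pi / L0          # outer-scale wavenumber
    return 0.033 * cn2 * np.exp(-(kappa / km) ** 2) / (kappa ** 2 + k0 ** 2) ** (11.0 / 6.0)

# evaluate across the input, inertial, and dissipation ranges
kappa = np.logspace(-3, 3, 500)            # rad/m
psd = von_karman_psd(kappa, cn2=1e-13)
```

Plotting `psd` against `kappa` on log–log axes reproduces the flattening below 2π/L_0, the −11/3 inertial slope, and the exponential cutoff near 5.92/l_0, i.e., the three regions identified in Figure 5.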
In nature, the intensity of the tropospheric turbulence is related to the atmospheric conditions. The turbulence intensity is represented by the refractivity structure constant C_n^2, which is a function of altitude. Taking into account the changes of atmospheric humidity and water vapor content, C_n^2 can be expressed as [61]

$C_n^2(h) = 8.148 \times 10^{-56}\, v_{rms}^2\, h^{10} e^{-h/1000} + 2.7 \times 10^{-16} e^{-h/1500} + C_0 e^{-h/100} + 6.4048 \times 10^{-12}\, h^{-11/6} N_{wet}^2$   (31)

where C_0 = 3.9 × 10^−12 m^−2/3, h is the height in the troposphere, and $v_{rms} = \sqrt{v_g^2 + 30.69 v_g + 348.91}$ (m/s) is the RMS value of the wind speed along the vertical path, with a typical value of 21 m/s [62]. N_wet is a piecewise function of height (34); for 10^4 < h < 1.5 × 10^4 m it equals 28.8 − 0.00556(h − 10^4), while the lower-altitude branches are defined in terms of the ground temperature t_0 and the ground refractive rate u_0.

We assume that the turbulent thickness is 1000 m. If the troposphere is divided into five layers, the resulting distribution of C_n^2 and the percentage of energy in each layer are shown in Figure 6. The percentage of turbulent energy within 200 m of the ground is above 85%. Therefore, the following analysis considers the impact of only one layer of turbulence.

The intensity of the tropospheric turbulence can be expressed as the refractive index variance σ_n^2, whose unit is cm^2 [40]. Figure 7 shows the random phase power spectra for σ_n^2 = 0.1 cm^2 and σ_n^2 = 3.0 cm^2 in the L-band, together with the amplitude and phase fluctuations produced by the phase screen method. The amplitude and phase fluctuations become more pronounced as σ_n^2 increases. The corresponding values of C_n^2, A_NV, and P_NV can be obtained from (31) and (30).

Simulation of Point Target

From the above analysis, it can be seen that tropospheric turbulence may cause the amplitude and phase of the signal to fluctuate, degrading GEO SAR imaging quality. Therefore, for different integration times and wavelengths, the impacts of tropospheric turbulence on imaging are analyzed by evaluating point target imaging. The system parameters of GEO SAR are given in Table 2.
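Before turning to the point-target simulations, the altitude dependence of C_n^2 in (31) can be explored numerically. The sketch below evaluates the dry-air terms of (31) and estimates the fraction of the height-integrated C_n^2 in each 200 m sub-layer of a 1000 m turbulent layer; the wet term is omitted because the piecewise N_wet expression is not fully reproduced above, so the percentages are only indicative.

```python
import numpy as np

def cn2_dry(h, v_rms=21.0, c0=3.9e-12):
    """Dry-air part of the C_n^2 profile of Equation (31), in m^(-2/3).

    The wet term 6.4048e-12 * h**(-11/6) * N_wet**2 is omitted here because
    the piecewise definition of N_wet is not fully reproduced in the text.
    """
    return (8.148e-56 * v_rms**2 * h**10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + c0 * np.exp(-h / 100.0))

# split a 1000 m turbulent layer into five 200 m sub-layers and compare
# the integrated C_n^2 (a proxy for the turbulent energy) in each of them
edges = np.linspace(1.0, 1000.0, 6)        # start at 1 m to avoid h = 0 in h**10
h = np.linspace(1.0, 1000.0, 10000)
profile = cn2_dry(h)
total = np.trapz(profile, h)
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (h >= lo) & (h < hi)
    frac = np.trapz(profile[m], h[m]) / total
    print(f"{lo:6.0f}-{hi:4.0f} m: {100 * frac:5.1f} % of integrated C_n^2")
```

With the dry terms alone, the lowest 200 m already carries roughly 85–90% of the height-integrated C_n^2, which is consistent with the statement above and with Figure 6.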
In nature, the typical values of C_n^2 are generally between 10^−17 m^−2/3 (weak turbulence) and 10^−13 m^−2/3 (strong turbulence). Assuming that C_n^2 = 10^−13 m^−2/3, which is the value under extremely unstable atmospheric conditions, the amplitude fluctuations I_{D_TF} and phase fluctuations φ_{D_TF} that affect imaging can be obtained from phase screen theory. The averages of the PSLR and ISLR from Monte Carlo simulations of L-, C-, and X-band point target imaging for different integration times are given in Table 4. Tropospheric turbulence in the inertial region has little effect on the L and C bands and only a slight effect on the X-band. Since the inertial-region turbulence occurring in nature is not strong enough to cause serious degradation, C_n^2 needs to be enlarged artificially in order to analyze the influence of different turbulent energies, wavelengths, and integration times. The results are given in Appendix A. Table A1 presents the averages of the PSLR and ISLR from Monte Carlo simulations of L-, C-, and X-band point target imaging. Table A2 shows the Monte Carlo simulation results of target imaging at 100 s, 150 s, and 300 s for different wavelengths when σ_n^2 = 0.1 cm^2. According to Table A1, the larger the turbulence intensity, the more severely the PSLR and ISLR deteriorate, although for the L-band the PSLR does not change much. For the same turbulence intensity, the higher the signal frequency, the worse the PSLR and ISLR.

Figure 9 shows the azimuthal profiles for different σ_n^2 in the L-band; the red line represents the azimuthal profile without tropospheric turbulence. For the L-band, as σ_n^2 increases, the PSLR deteriorates slightly while the ISLR deteriorates significantly, which is consistent with the experimental data.

Discussion

From the previous analysis we can see that the order of magnitude of C_n^2 is 10^−8 when σ_n^2 = 0.1 cm^2, even though the fluctuations are not obvious. This is 5 to 9 orders of magnitude greater than the turbulence found in nature, whose C_n^2 lies between 10^−17 m^−2/3 (weak) and 10^−13 m^−2/3 (strong turbulence). Under extremely unstable atmospheric conditions, C_n^2 can reach only about 10^−12 [63], which is still much less than 10^−8.
Therefore, the tropospheric turbulence in the inertial subrange has essentially no effect on imaging in nature and only a slight effect on the X-band, as shown in Table 4. It should be noted that atmospheric turbulence depends on the hour of the day, with relatively weak disturbances at night and maximum turbulence around noon. We use C_n^2 in its natural range of 10^−17 m^−2/3 to 10^−12 m^−2/3 to represent the turbulent energy, so the all-day turbulence distribution is covered. Since even the strongest turbulence has no obvious effect on GEO SAR according to the subsequent analysis, no analysis is performed for specific time intervals.

In fact, for the azimuthal SAR signal affected by turbulence, since the turbulence is very weak in the microwave band, only the phase fluctuation φ(u) needs to be considered. The SAR signal can then be rewritten using a Taylor expansion and transformed into the frequency domain [37] to complete the analysis of the turbulence effects, where ⊗ denotes spectral convolution, Φ(f) is the power spectral density of the random phase, and f is the azimuthal frequency. Owing to the long integration time of GEO SAR (generally above 100 s), the main lobe of S_0(f) lies within 0.01 Hz. Considering the relationship between the azimuthal frequency and the spatial frequency, Φ(f) can be written in terms of the beam-foot velocity V_bf (39) [64].

The tropospheric turbulence occurs mainly near the surface, where the wind speed is generally below 30 m/s, so the sum of the wind speed and V_bf is still at the level of V_bf (as shown in Figure 10). Here we therefore employ the value of V_bf for the analysis. Using the expression of V_bf in the inertial zone, the cutoff frequency of the power spectrum is f_c ≥ V_bf/L_0. When the integration time is above 100 s, V_bf/L_0 ≫ 0.01 Hz. Combined with (40), this means the turbulence only affects the sidelobes rather than the main lobe, and the degree of influence depends on the turbulent energy.

When C_n^2 = 10^−13 m^−2/3, the values of A_NV and P_NV in nature are A_NV = 6.9448 × 10^−5 and P_NV = 6.7711 × 10^−5 (41). The ISLR can then be expressed in terms of these quantities, where 10 log(Θ_ISLR,0) = −9.7 dB is the ideal ISLR. In nature, A_NV and P_NV are much smaller, as shown in (41), and can safely be ignored. As mentioned above, the turbulence energy in nature is very small, and its influence on GEO SAR in the low-frequency bands can be ignored.
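The claim that inertial-range turbulence perturbs only the sidelobes can be checked with a one-line comparison of the turbulence cutoff frequency V_bf/L_0 against the main-lobe width of roughly 1/T_int. The beam-foot velocity below is an assumed placeholder, since the actual value varies with the orbital position (Figure 10).

```python
# Illustrative numbers only; V_bf for GEO SAR varies strongly along the orbit.
V_bf  = 300.0       # beam-foot velocity, m/s (assumed)
L0    = 100.0       # outer scale of turbulence, m
T_int = 100.0       # integration time, s

f_cutoff   = V_bf / L0      # lowest azimuth frequency affected by the inertial range
f_mainlobe = 1.0 / T_int    # order of magnitude of the main-lobe width, Hz

print(f"turbulence cutoff ~ {f_cutoff:.3f} Hz, main-lobe width ~ {f_mainlobe:.3f} Hz")
print("turbulence only perturbs the sidelobes"
      if f_cutoff > f_mainlobe else "turbulence can reach the main lobe")
```

For any plausible V_bf and L_0, the cutoff is orders of magnitude above 0.01 Hz, so the main lobe stays clean and only the sidelobe level (ISLR) is degraded.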
Here, it is noted that variations of the tropospheric measures can reach a standard deviation of 0.3–0.5 cm [65,66], which cannot be ignored in X-band systems. This conclusion may seem inconsistent with the one reached here, but it actually verifies our research from another aspect. In the measurements, the tropospheric variations consist of both the slowly varying tropospheric component and the fast-varying turbulent component, whereas in this section the standard deviation of the path delay is related only to the tropospheric turbulence. When the slowly varying troposphere, i.e., the background troposphere, is considered as well, the total path delay standard deviation caused by the troposphere reaches 0.843 cm (combining the slowly varying component in Table 1 with the fast-varying turbulence from Figure 6 and (13)). Thus, the varying troposphere will certainly affect the X-band signal. In addition, the total path delay standard deviation calculated from the IGS zenith delay data can also reach 0.56 cm, which further supports our conclusions.

Conclusions

GEO SAR is characterized by a long synthetic aperture time and a large observation range, over which the atmosphere changes considerably in time and space. In this paper, we model and analyze the tropospheric influences on GEO SAR, including the background troposphere and turbulence.

For the background tropospheric influences, the changing troposphere causes the GEO SAR image to shift, and the offset is related only to the first-order ROC, not to the orbital configuration or the wavelength. The higher-order phase errors accumulate over the long integration time, which results in image defocusing. Through theoretical analysis and verification with FY-3C satellite data and IGS data, we reach two important conclusions. Firstly, the shorter the wavelength, the greater the impact of the tropospheric ROCs and the higher the required azimuth resolution, which results in more serious deterioration of the GEO SAR image. Secondly, when the azimuthal resolution is fixed, the smaller the beam-foot velocity and the longer the integration time, the more serious the deterioration.

The tropospheric turbulence produces random amplitude and phase fluctuations, which result in image defocusing. We mainly analyze the effects of turbulence on GEO SAR imaging.

Figure 1. Sketch map of geosynchronous orbit synthetic aperture radar (GEO SAR) signal propagation in the troposphere.

The amplitude fluctuations I_{D_i^TF} and phase fluctuations φ_{D_i^TF} of the signal can be obtained by calculating the tropospheric transfer function D_i^TF.

Figure 2. Tropospheric signal delay based on measured data: (a) atmospheric refractive index profile data from FY-3C and (b) tropospheric zenith delay data from IGS.

Figure 4. The azimuthal profiles of point target imaging of low-inclination GEO SAR at perigee. (a) L-band. (b) X-band.

Figure 6. The distribution of C_n^2 and the percentage of energy in each layer.
Figure 7. The power spectral density and the amplitude and phase fluctuations at different turbulence intensities. (a) σ_n^2 = 0.1 cm^2; (b) σ_n^2 = 3.0 cm^2. (Top: the power spectrum; bottom: amplitude and phase fluctuations.)

Then the Monte Carlo simulation is carried out, in which the fluctuations are generated and measured by A_NV and P_NV. The results are shown in Figure 8. C_n^2 increases with σ_n^2, which indicates the change of turbulence intensity. Meanwhile, A_NV and P_NV increase with the turbulence intensity, indicating that the amplitude and phase fluctuations of the signal become more serious as the tropospheric turbulence increases.

Figure 8. Changes of C_n^2, A_NV, and P_NV with σ_n^2 (blue: A_NV; black: P_NV; red: C_n^2).

Figure 10. Beam-foot velocity variations of GEO SAR with different orbital configurations. (a) Different orbital inclinations and (b) different eccentricities in the case of 18° inclination.

Table 2. System and orbit parameters of GEO SAR.

Table 3. Evaluations of point target imaging of GEO SAR with different orbital configurations at different wavelengths (P: perigee; E: equator; H: high-inclination; L: low-inclination).

Table 4. The imaging results in extremely unstable atmospheric conditions for different bands and integration times.
v3-fos-license
2018-06-03T00:57:29.838Z
2013-11-15T00:00:00.000
41078933
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://www.ijcasereportsandimages.com/archive/2013/003-2013-ijcri/002-03-2013-matthews/ijcri-00203201322-matthews.pdf", "pdf_hash": "9d20c64b9490442cbe61a1beb7f16182e606c6f5", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:106", "s2fieldsofstudy": [ "Medicine" ], "sha1": "9d20c64b9490442cbe61a1beb7f16182e606c6f5", "year": 2013 }
pes2o/s2orc
Combination therapy with vitamin D3, progesterone, omega-3 fatty acids, and glutamine reverses coma and improves clinical outcomes in patients with severe traumatic brain injuries: A case series of three patients
Combination therapy with vitamin D3, progesterone, omega-3 fatty acids, and glutamine reverses coma and improves clinical outcomes in patients with severe traumatic brain injuries: A case series of three patients Leslie R Matthews, Omar K Danner, Y A Ahmed, Diane M Dennis-Griggs, Alexis Frederick, Clarence Clark, Ronald Moore, Wilson DuMornay, Ed W Childs, Kenneth L Wilson Disclaimer: This manuscript has been accepted for publication in International Journal of Case Reports and Images (IJCRI). This is a pdf file of the provisional version of the manuscript. The manuscript will undergo content check, copyediting/proofreading and content formatting to conform to the journal's requirements. Please note that during the above publication processes errors in content or presentation may be discovered which will be rectified during manuscript processing. These errors may affect the contents of this manuscript, and the final published version of this manuscript may be extensively different in content and layout than this provisional PDF version. ABSTRACT Introduction: Traumatic brain injury (TBI) is a major public health problem and a leading cause of death and disability in the United States. Management of patients with TBI has changed very little over the last 20 years. Case Series: A case series of three patients with severe TBIs who were aggressively treated with vitamin D3, progesterone, omega-3 fatty acids, and enteral glutamine for six weeks, termed neuroceutical augmentation for traumatic brain injury (NATBI), with very favorable outcomes. Conclusion: A large clinical study trial using these four supplements (NATBI) together is warranted. INTRODUCTION Traumatic brain injury (TBI) is a major public health problem which affects over 1.7 million people annually, with 275,000 hospitalizations and 52,000 deaths in the U.S., according to the CDC [1]. The medical cost of treating TBI patients in the United States in 2010 was $76.5 billion and is rising annually [1]. Primary causes of TBI include the following: motor vehicle crashes, falls, assaults, and sports- or recreation-related injuries (concussions). Finding the right treatment to reduce mortality rates and improve the clinical outcomes in TBI patients has been elusive. Management of patients with TBI has changed very little over the last 20 years. We present a case series of three patients with severe TBIs who were aggressively treated with vitamin D3, progesterone, omega-3 fatty acids, and enteral glutamine for up to six weeks, termed neuroceutical augmentation for traumatic brain injury (NATBI), with very favorable outcomes [2][3][4][5]. The NATBI protocol works on multiple levels and neuroprotective pathways in TBI patients by downregulating cytokine production, preventing oxidative stress (free radical oxygen formation), and decreasing cerebral edema and inflammation, thus limiting secondary brain injury, in contradistinction to progesterone therapy alone (Protect III study) [3][4][5]. In addition, our NATBI regimen is relatively inexpensive, safe, and very effective at reducing brain and systemic inflammation postinjury. Advancements in the treatment of TBI require a great understanding of the biochemical mechanisms of the brain during a normal resting state as well as of its metabolism after a severe traumatic event.
Brain metabolism is markedly altered during TBI. After the initial insult to the brain, the brain's metabolism is altered and can increase up to 140% of its normal rate. Vitamin D (a steroid hormone) and omega-3 fatty acids (essential fatty acids) are both very powerful anti-inflammatory agents that reduce cerebral edema and swelling. Glutamine becomes an essential amino acid during stress and provides the extra glucose (via the Cori cycle) that is used by the injured brain and the extra glucose used by the immune response system to fight off infection during stress. Progesterone (also a steroid hormone) is a neuroprotector of injured brain cells and potentiates the effect of vitamin D. These agents are all immune modulators which work synergistically to prevent secondary brain injury by limiting or decreasing inflammation, an increasingly well-recognized cause of ongoing brain swelling after a primary injury. They are also neuroprotectors that make the neurons more resistant to stress, ischemia, hypothermia, hyperthermia, hypoglycemia, hyperglycemia, hypotension, and hypertension. Immune modulation with nutritional supplements is a rapidly advancing field with a very promising future in treating TBI as well as other critically injured/ill patients. Patients in a coma with severe TBI (Glasgow Coma Score <8) who were admitted to a Level I trauma center were evaluated in a prospective observational study. Patients were treated with a neuroceutical combination of vitamin D3, omega-3 fatty acids, progesterone, and glutamine, initially via a nasogastric tube and later orally, for six weeks. Primary outcomes were mortality rate and return to recovery, which was defined as a Glasgow Coma Score (GCS) of 10 or greater. CASE REPORT Case 1: Patient 1 is a 17-year-old female restrained driver who was involved in a single-car, multiple-rollover motor vehicle crash with a 10-foot ejection and presented to the Emergency Department intubated and unresponsive with a GCS of five out of fifteen. Her physical exam was notable for a blood pressure of 105/56 mmHg, pulse of 87 beats/min, temperature of 37.7°C, respiratory rate of 20, and oxygen saturation of 100% on the ventilator. Her secondary survey revealed unequal pupils with discordant reactivity: her right pupil was 8 mm and nonreactive to light, and her left pupil was 3 mm and reactive to light. Ominously, she was noted to have decerebrate posturing of both the upper and lower extremities bilaterally. On further examination, a 5 cm laceration of the right lower anterior thigh was identified and repaired. Her Focused Assessment with Sonography for Trauma (FAST) exam was negative. A computed tomography (CT) scan of her head revealed multifocal, punctate brain hemorrhages, consistent with a diffuse axonal injury (DAI) (Figure 1, her initial head CT scan). CT scans of the cervical spine, chest, abdomen, and pelvis revealed bilateral spinous process fractures of C7, T1, and T2, a midsternal body fracture, bilateral pulmonary contusions, and a distal right clavicle fracture. She also sustained a cardiac contusion associated with postinjury arrhythmias, which were treated conservatively. An external ventricular drainage device was placed by neurosurgery to help monitor and manage her intracranial pressure and maintain her cerebral perfusion pressure within acceptable limits.
Upon her admission to the surgical intensive care unit (SICU), she was started on a regimen of vitamin D3 50,000 international units (IU), progesterone 20 mg, omega-3 fatty acids 2 grams (Lovaza), and enteral glutamine 20 grams via her nasogastric tube (NGT). Her decerebrate posturing resolved in less than 24 hours. By hospital day 3, she was able to follow simple commands while off sedation. Her GCS and clinical status continued to improve and she was able to be extubated on hospital day 9. She was discharged to inpatient rehabilitation on hospital day 18. Although her GCS improved to 12 prior to rehab transfer, some residual right-sided weakness remained. She rapidly progressed to a GCS of 15 during her recovery and was discharged home from inpatient rehab doing well after 1 month. Less than 3 months after her initial insult, she has returned to school full time and is completing her senior year of high school with her right-sided weakness essentially resolved. Case 2: Patient 2 is a 31-year-old male who was brought to the Emergency Room by ambulance due to altered mental status (AMS) and a witnessed seizure following an assault with suspected head trauma. He suffered blunt force trauma to his head secondary to being struck with a brick. His primary and secondary examinations were unremarkable with the exception of a GCS of 9. His vital signs were within normal limits and his hemodynamics were stable. A CT scan of the head revealed a bilateral frontal intraparenchymal hemorrhage, with left frontal, parietal, and temporal subdural hematomas (SDH), a left frontal subarachnoid hemorrhage (SAH) with a 7 mm right-to-left midline shift, cerebral edema and effacement of the left frontal horn, and a right temporal hematoma (Figure 2, his initial CT scan of the head). CT scans of the cervical spine, chest, abdomen, and pelvis were unremarkable. On tertiary survey, his past medical history was noted to be significant for human immunodeficiency virus (HIV positive) infection, hepatitis A, syphilis, shingles, and alcohol abuse. His CD4 T-cell count on admission was 46 (normal >500), consistent with a diagnosis of acquired immunodeficiency syndrome. His mental status declined rapidly during his initial evaluation and management period in the ED, and he was taken to the operating room by neurosurgery for an emergent decompressive hemicraniectomy. Due to the severity of his head injury and his multiple comorbidities, the patient's prognosis was deemed to be very poor by the neurosurgery service. Upon admission to the SICU, his GCS was 3T out of 15. He was subsequently started on our NATBI protocol, with a regimen of vitamin D3, progesterone, Lovaza, and glutamine immediately via orogastric tube. His postoperative course was complicated by acute respiratory distress syndrome (ARDS), ventilator-associated pneumonia (VAP), and acute sepsis. However, his condition improved with intravenous antibiotics, ventilatory management, and nutritional support. His GCS continually improved over the course of his ICU stay and he was able to be discharged to a long-term rehabilitation facility on hospital day 18 with a GCS score of 11T, breathing spontaneously via his tracheostomy. One month after discharge, he was evaluated in the trauma clinic and was noted to have a GCS of 15. He was tolerating a regular diet, ambulating without assistance, and adjusting very well to home life. The only deficits reported were some memory loss, which he noted was improving on a daily basis.
Case 3: Patient 3 is a 23-year-old female who was an unrestrained passenger involved in a single-car MVC (the car hit a tree) with a fatality at the scene. The patient had a GCS of 3 out of 15 in the field with decerebrate posturing, according to emergency medical personnel. She was immediately intubated by paramedics at the scene. Physical examination in the emergency department was notable in that the patient was initially hemodynamically unstable with a blood pressure of 90/60 mmHg, pulse of 128, respiratory rate of 18, and oxygen saturation of 100%. She required four units of packed red blood cells plus three liters of isotonic crystalloid to become hemodynamically stable. Her pupils were 5 mm and sluggishly reactive to light bilaterally. She had massive facial edema and swelling. Her endotracheal tube was intact and in good position, confirmed with positive end-tidal CO2. CT scans of the head, face, and cervical spine revealed the following: diffuse SAH over the left frontoparietal lobes; cerebral edema; SDH; DAI; transtentorial herniation; tonsillar herniation (Figure 3); fracture of the right mandibular angle, body, and parasymphysis; but no cervical spine fracture or dislocation. CT scans of the chest, abdomen, and pelvis showed the following: a midsternal fracture; bilateral, multiple rib fractures; left pulmonary contusion; Grade 1 splenic laceration; and a left acetabular fracture. X-rays of the lower extremities revealed a right distal tibia/fibula fracture, which was stabilized by orthopedic surgery. An EVD monitor was placed and revealed that the patient had an opening cerebral perfusion pressure of 27 cm H2O. The standard protocol for patients with elevated ICP was initiated. The patient was admitted to the SICU and started on our NATBI protocol consisting of vitamin D3, progesterone, omega-3 fatty acids, and glutamine. She had a prolonged hospital course which was complicated by refractory elevation of her ICPs, prolonged coma with a depressed GCS of 4T to 8T, ventilator-associated pneumonia (VAP), urinary tract infection (UTI) with urosepsis, candidemia with fungal sepsis, and acute renal insufficiency. The patient was transferred on hospital day #109 with a GCS of 8T, breathing spontaneously on a tracheostomy collar, to a long-term rehab facility. She continued to make satisfactory improvement during rehab, and was discharged from the long-term rehabilitation facility to home six weeks later with a GCS of 12 to 13, talking, following commands, and eating with assistance. The patient was not ambulating when discharged and will require extensive ongoing physical therapy. She was lost to follow-up after discharge from the rehabilitation center. All three patients, who had very poor prognoses, survived their severe TBI and had a return of recovery to a GCS of 15 out of 15. Six-month follow-up revealed that all three patients' short-term memory loss had resolved. DISCUSSION Emerging understanding of NATBI has a very promising future in the treatment of TBI. Vitamin D (classified as a vitamin) is actually a steroid hormone with pleiotropic effects, which include its action as an immune modulator [6]. Of note, receptors for vitamin D are located on every cell and tissue of the human body, including brain tissue.
Vitamin D has been discovered to be very important in immunomodulation, regulation of inflammation and cytokines such as IL-1 beta and tumor necrosis factor-alpha (which increase brain cell edema), cell proliferation, cell differentiation, apoptosis, and angiogenesis, in addition to the traditional calcium, magnesium, and phosphate homeostasis and bone formation. Deficiency of vitamin D affects more than 70% of the United States general population and has been found to be associated with worsening of many inflammatory conditions [7][8][9]. Even more important to brain health, vitamin D binds receptors in brain cells and helps to produce heat shock proteins (HSP), which act as chaperone proteins that make the cells more resistant to stress [10,11]. Heat shock proteins help brain cell proteins maintain their 3D shape/conformation during stress, ischemia, hyperthermia, hypothermia, hyperglycemia, hypoglycemia, hypertension, and hypotension. Loss of 3D protein shape by neuronal cells results in loss of cellular function, which predisposes the brain cell to apoptosis and cell death [12]. All cells in the human body are capable of producing HSP. Therefore, low vitamin D levels may be associated with lower levels of HSP. Thus, vitamin D deficient neuronal cells are less likely to survive a stressful event, such as trauma or ischemia to the brain. HSP have anti-apoptotic (preventing programmed brain cell death) and anti-inflammatory properties, which also decrease cerebral edema [13,14]. HSP play a very important and central role in brain cell survival and resiliency after a traumatic brain injury. Recent research has shown that progesterone is a neuroprotectant that works synergistically with vitamin D in protecting the nerve cell from injury. Progesterone has been shown to protect the brain from traumatic injury and is now in Phase III clinical trials. However, studies have shown that progesterone's beneficial effects can be diminished in vitamin D deficient patients. Vitamin D can modulate neuronal apoptosis, trophic factors, inflammation, oxidative stress, excitotoxicity, and myelin and axon repair (Hua F, Stein DG Horm Behav 2012). Low-dose vitamin D hormone plus progesterone has been demonstrated by Hua et al. to improve performance in acquisition more effectively than progesterone alone, suggesting that a lower dose of activated vitamin D may be optimal for combination therapy. Their data support the conclusion that the combination of progesterone and vitamin D is more effective than progesterone alone in preserving spatial and reference memory (Hua F, Stein DG Horm Behav 2012). According to the CDC, up to 80% of the United States population is omega-3 fatty acid deficient. As 30% of human brain tissue is made up of omega-3 fatty acids, emerging evidence suggests that supplementing TBI patients with omega-3 fatty acids may help the injured brain to repair itself. This makes omega-3 fatty acids a very essential adjunct in the treatment of severe TBI. A broken brick wall is repaired with bricks and not straw. The same analogy applies to the brain: it needs omega-3 fatty acids to heal properly. Also, omega-3 fatty acids are anti-inflammatory and work very well with vitamin D3 in downregulating inflammation, which counteracts cerebral edema/swelling. Thus, vitamin D deficiency and omega-3 fatty acid deficiency may work synergistically to worsen outcome in patients with TBI. As both are very prevalent in the general U.S.
population, and even more pronounced in critically ill patients with TBI, at-risk patients should, in our opinion, be routinely supplemented with vitamin D and omega-3 fatty acids. In fact, vitamin D levels of less than 17.8 ng/mL have been shown to be associated with a 28% increased all-cause risk of premature death [6]. Equally important, omega-3 fatty acid deficiency is associated with over 96,000 preventable deaths in the U.S. annually, according to a recent report from the CDC. Therefore, nutritional deficiencies of these two supplements can potentially have a grave impact on the clinical outcomes of TBI patients. On the other hand, glutamine is a nonessential amino acid that becomes a conditionally essential amino acid during periods of stress. Glutamine is the most abundant amino acid in human skeletal muscle. During stress, glutamine is used as the primary energy source for rapidly dividing cells and is used by the liver to make glucose via gluconeogenesis to supply glucose to the brain, red blood cells, enterocytes of the small bowel, and the cells of the immune system. Glutamine also works synergistically with vitamin D3 to increase HSP70 [15,16]. Thus, adequate levels of glutamine and vitamin D3 are needed to produce optimal concentrations of HSP70 in brain cells, which potentially work together to protect the injured brain from ongoing insult or injury. Of note, there were no side effects or complications from treating these three patients with our NATBI therapeutic regimen using the combination of supplements noted above. Increasing data suggest that each supplement in the NATBI protocol is essential to obtaining optimal clinical outcomes in severe TBI patients. This novel approach (the NATBI protocol) to treating TBI patients works by downregulating multiple inflammatory response pathways which produce cerebral edema, by upregulating HSP, which help injured brain cells survive stress of any kind, and by helping the brain to repair itself with omega-3 fatty acids. CONCLUSION We have reported a case series of three patients with very severe TBIs who were managed with vitamin D3, progesterone, omega-3 fatty acids, and glutamine. All three patients presented in a coma and had very poor and grave prognoses based on their CT scans and neurosurgical consultation/recommendations. They are now well adjusted and have returned to their mental baselines with minimal long-term effects of TBI, other than short-term memory loss, which is rapidly improving. A large clinical study trial using these four neuroceutical supplements is warranted. Our group is the first to report in the literature the multicomponent therapeutic regimen of vitamin D3, progesterone, omega-3 fatty acids, and glutamine as a combination therapy for moderate and severe TBI treatment, which we have termed neuroceutical augmentation for TBI (NATBI). A large clinical study trial using these four supplements together is warranted. The potential for improving clinical outcomes and potentially decreasing healthcare costs associated with TBI patients is limitless.
********* Author Contributions Leslie R Matthews -Substantial contributions to conception and design, Acquisition of data, Analysis and interpretation of data, Drafting the article, Revising it critically for important intellectual content, Final approval of the version to be published Omar K Danner -Acquisition of data, Analysis and interpretation of data, Drafting the article, Revising it critically for important intellectual content, Final approval of the version to be published Y A Ahmed -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Diane M DennisGriggs -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Alexis Frederick -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Clarence Clark -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Ronald Moore -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Wilson DuMornay -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Ed W Childs -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Kenneth L Wilson -Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Guarantor The corresponding author is the guarantor of submission.
v3-fos-license
2019-03-18T14:04:16.305Z
2018-11-16T00:00:00.000
81097058
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=88534", "pdf_hash": "810db5be15b0f5004d4cbd03c8556d0c86e719aa", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:107", "s2fieldsofstudy": [ "Medicine" ], "sha1": "810db5be15b0f5004d4cbd03c8556d0c86e719aa", "year": 2018 }
pes2o/s2orc
Effect of the Routine Varicella Immunization on Herpes Zoster in Japan in the First Half of the Year In Japan, herpes zoster is not monitored officially or nationwide. Recently, the databases of all electronic medical claims nationwide (NDBEMC) have become available for research. We use NDBEMC from April 2011 to March 2015. To evaluate the effects of the initiation of routine immunization for varicella in children, we regressed the number of herpes zoster patients on a dummy variable for the routine immunization for varicella in children, with and without a linear time trend. The estimated coefficient for the routine immunization for varicella was 0.5157 and its p-value was 0.001. However, if the time trend was added as an explanatory variable, the estimated coefficient for the routine immunization for varicella changed to −0.039 and its p-value was 0.384. This means that the prevalence of herpes zoster was 7.8% higher after the introduction of routine immunization for varicella than before; however, this difference was presumed to reflect an underlying upward trend. Introduction Herpes zoster, a common infectious disease related to the varicella-zoster virus, especially affects elderly people [1]-[10]. It causes a painful, blistering rash and can lead to postherpetic neuralgia, a persistent painful complication. However, in Japan, the Law Related to the Prevention of Infectious Diseases and Medical Care for Patients of Infections (The Infectious Diseases Control Law) has not included this disease in its surveillance. The official and nationwide epidemiology of herpes zoster therefore remains unclear. A few local, small, but long-term studies have examined herpes zoster. They provide some valuable evidence related to herpes zoster [6]. Recently, in Japan, all electronic medical claims nationwide (National Database of Electronic Medical Claims (NDBEMC)) have been disclosed as "Data of Medical Claims and Health Check-Ups for Metabolic Syndrome" by the Ministry of Health, Labor and Welfare. It covered 98.4% of all medical claims in 2015 [11]. All doctors must record a diagnosis on each medical claim. Therefore, it must necessarily constitute the most reliable data source. This paper presents an examination of the epidemiology of herpes zoster using NDBEMC. In October 2014, routine immunization for varicella was introduced in Japan. Earlier studies predicted and subsequently found that immunization for varicella raises the prevalence of herpes zoster because of diminished opportunities to boost the immunity of elderly people [3] [12] [13] [14] [15]. However, other studies found no significant effect [16] [17] [18]. One mathematical model implied that its effect depends on the situation; some cases showed no effect [19]. Unfortunately, such an effect has not yet been confirmed in Japan. Therefore, this study was conducted to estimate its effect on the epidemic of herpes zoster in the first half of the year using NDBEMC. Data and Study Period From electronic medical claims data, NDBEMC can count patients who have been diagnosed as having herpes zoster, excluding suspected cases. In May 2015, the coverage of electronic medical claims was approximately 98.4% of all medical claims. The NDBEMC data were those of April 2009 to March 2015. However, in 2009 and 2010, the coverage was only 52.1% and 81.4%, respectively. In contrast, in April 2011 it increased to 93.1% [11]. Therefore, we omit the low-coverage period until March 2011. We then adjust the number of patients by the coverage rate.
It is noteworthy that the number of patients in each month in NDBEMC should be regarded as the prevalence rate, not as the incidence, because the symptoms caused by herpes zoster typically continue for more than one month. For that reason, the same patients might appear in NDBEMC over several months. Herpes zoster is a common disease among elderly people. Therefore, we particularly focused on persons 65 years old or older.

Statistical Analysis

We estimate the simple equation

$r_t = \alpha + \beta d_t + \sum_i \gamma_i m_{it} + \varepsilon_t$   (1)

where r_t represents the prevalence rate of herpes zoster per 1000 population for people 65 years old or older, divided by the coverage rate of NDBEMC, and d_t is a dummy variable for the initiation of the routine immunization for varicella: it is one from October 2014 onward and otherwise zero. Also, m_it is a dummy variable that represents the month. Moreover, we add a linear trend to the explanatory variables in Equation (1) as

$r_t = \alpha + \beta d_t + \sum_i \gamma_i m_{it} + \delta T_t + \varepsilon_t$   (2)

where T_t represents a linear time trend.

Ethical Considerations

This study used only anonymous data that had been de-linked from individual patient information. Therefore, ethical issues related to medical institutions and pharmacies are unrelated to this study. The use of NDBEMC data by MK for this study was approved by the Ministry of Health, Labor and Welfare of Japan on July 27, 2016 (Research project: Estimation of the number of patients of infectious diseases).

Results and Discussion

Figure 1 shows the prevalence of herpes zoster, underscoring the seasonality clearly: it is low in January and February and high in September and October. It also shows an apparently upward trend before the introduction of routine immunization of varicella for children. The average of the prevalence was 6.621 per 1000 elderly people. The estimation result shows that the estimated coefficient for the initiation of the routine immunization for varicella was 0.5157. Its p-value was 0.001. The coefficient of determination was 0.6195.

If we add the time trend as an explanatory variable, the estimated coefficient of the linear trend is 0.023, with a p-value of less than 0.0004. However, the estimated coefficient for the initiation of the routine immunization for varicella becomes −0.039, with a p-value of 0.384. The fit is quite good: the coefficient of determination was 0.9654.

An earlier study conducted in Japan [10] found the incidence rate of herpes zoster in people older than 60 years to be around 7-8 per 1000 population in 1997-2011 in Miyazaki prefecture. The prevalence rate found in our study, 6.621, was comparable, but it was lower than the incidence in the US, which was 11.12-15.1 for elderly people older than 60-65 years during 1992-2010 [18] [19].

In the US, routine varicella immunization for children had already been introduced in 1996. For that reason, those incidences might have been affected. If we adopt the varicella immunization effect on herpes zoster incidence in the US as 39%, referring to results from an earlier study [18], and adjust those numbers, then the incidence of herpes zoster before the initiation of varicella immunization for children is expected to be 8.1-10.8. Despite such an adjustment, the incidence of herpes zoster in Japan according to NDBEMC is apparently lower than that in the US.

The results show a value higher by 0.5157 after the introduction of routine immunization for varicella than in the period before introduction. Its magnitude was 7.8% higher than in the before period, evaluated at its average. Therefore, we can infer that routine immunization for varicella raises the prevalence of herpes zoster significantly.

A similar earlier study [18] estimated its effect as 39%. Our result is apparently much smaller. This discrepancy might result from the evaluation timing for initiation: we set it earlier and with a shorter duration. The effect might be larger than ours if we extend the study period.

However, as shown in Figure 1, the prevalence has an apparent upward trend even in the period before introduction. Incidentally, if we add the trend into the estimation equation as an explanatory variable, then the dummy variable for the routine immunization for varicella is no longer significant. Therefore, the significant effect of routine immunization for varicella on herpes zoster prevalence obtained without a trend was presumed to reflect such an upward trend.

A potential reason for this upward trend is the decline of contact between elderly people and children, who are sometimes infected with varicella in a household or community. If routine immunization for varicella raises the prevalence of herpes zoster, then such a decline of contact also raises the prevalence of herpes zoster. In other words, this decline of contact might dominate the effect of the initiation of routine immunization for varicella.

Though it may be outside our main concern, we also examined the incidence of varicella in the same period, shown in Figure 2.

Conclusion

We examined the effect of routine immunization of varicella on the prevalence of herpes zoster in the first half of the year using the most reliable data source of illness. Although a significant effect of routine immunization of varicella was found, the upward trend that started before initiation might dominate the effect. We must monitor a longer period to distinguish the effects of routine immunization of varicella from the upward trend.

Figure 2. Incidence rate of varicella in patients per 1000 population, divided by the NDBEMC coverage rate. Note: These numbers represent the total number of herpes zoster patients in NDBEMC, divided by the population multiplied by 1000, and divided by the coverage rate of NDBEMC. The vertical line indicates October 2014, when routine immunization for varicella in children was introduced.
v3-fos-license
2022-05-10T15:16:41.339Z
2022-05-01T00:00:00.000
248632640
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2073-4425/13/5/831/pdf?version=1651890599", "pdf_hash": "e4fb79f2d33660e51cc763a1fcfd51864ea6b105", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:108", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "sha1": "57df04a7764490cf5db688e9d2b93e7c23b4761c", "year": 2022 }
pes2o/s2orc
USP10 as a Potential Therapeutic Target in Human Cancers Deubiquitination is a major form of post-translational protein modification involved in the regulation of protein homeostasis and various cellular processes. Deubiquitinating enzymes (DUBs), comprising about five subfamily members, are key players in deubiquitination. USP10 is a USP-family DUB featuring the classic USP domain, which performs deubiquitination. Emerging evidence has demonstrated that USP10 is a double-edged sword in human cancers. However, the precise molecular mechanisms underlying its different effects in tumorigenesis remain elusive. A possible reason is dependence on the cell context. In this review, we summarize the downstream substrates and upstream regulators of USP10 as well as its dual role as an oncogene and tumor suppressor in various human cancers. Furthermore, we summarize multiple pharmacological USP10 inhibitors, including small-molecule inhibitors, such as spautin-1, and traditional Chinese medicines. Taken together, the development of specific and efficient USP10 inhibitors based on USP10’s oncogenic role and for different cancer types could be a promising therapeutic strategy. Introduction Dysfunctional protein expression is associated with many diseases including cancer. Proteins, especially in eukaryotic cells, maintain normal cell function under steady-state conditions, 80% of which is mediated by the ubiquitin-proteasome system (UPS) [1]. Ubiquitin is an 8.5 kDa protein containing 76 amino acids that produces a polyubiquitin chain on a target protein, mainly through its seven lysines at the N-terminal and methionine sites. Ubiquitin usually binds to a target protein through glycine at the C-terminal and leads to its degradation through the 26S proteasome system [2,3]. In this process, three kinds of ubiquitin enzymes play a significant role: E1 ubiquitin-activating enzyme, E2 ubiquitin-conjugating enzyme, and E3 ubiquitin ligase. E3 ubiquitin ligase is target-specific and regulates the interaction of ubiquitin with target proteins. More details about the structure and function of UPS have been well summarized elsewhere [4]. Deubiquitination is a process that is the opposite of ubiquitination, and deubiquitination enzymes (DUBs) play an important role in this process. DUBs remove or cleave the isopeptides between the substrate protein and ubiquitin to achieve deubiquitination. This process of reversal had not been widely studied until now [5,6]. Although some researchers have found that DUBs affect a variety of cellular processes, their underlying mechanisms, targets, and inhibitors for specific diseases remain largely unknown [7]. The regulation of DUBs and the abundance of their downstream target proteins is considered a novel and promising cancer-treatment strategy [8]. Although hundreds of DUBs have been identified, the characteristics and functions of many in human diseases remain unclear [9]. USP10 has been widely studied and found to be involved in various cellular processes, including DNA repair, cell-cycle regulation [10], autophagy [11], and immune and inflammatory responses. Especially for inflammation, Li et al. demonstrated that USP10 can elevate proinflammatory factors in endometriosis (EM) and that Cai's Neiyi Prescription (CNYP) can reduce EM-induced inflammation by inhibiting the mRNA and protein expression of USP10 [12]. However, it is known that USP10 maintains normal cell function by controlling the protein balance through the ubiquitin-proteasome degradation pathway [13]. 
In immune system diseases, such as asthma and systemic sclerosis, USP10 influences inflammatory responses by inhibiting T-box transcription factor (T-bet) ubiquitination and stabilizing its expression, and thus, it has also been found that quercetin, an inhibitor of USP10, alleviates asthma via enhancing the ubiquitination and promoting the degradation of T-bet [14]. Other studies have also demonstrated that USP10 deubiquitinates AMPKα, regulating energy metabolism [15], and stabilizes the CD36 protein, thus promoting the development of atherosclerosis [16]. In recent years, mounting evidence has also indicated that USP10 plays a crucial role in tumorigenesis [9,[17][18][19][20][21][22][23]. The overexpression of USP10 can inhibit the formation of stress granules (SGs), thus restraining the development of a tumor [19,24]. In addition, two well-known tumor suppressors, P53 and SIRT6, can be regulated and considered as substrates of USP10 [25,26]. The p53 protein is a key transcription factor and plays an important role in the DNA-damage response, cell-cycle arrest, and apoptosis, which are closely associated with tumorigenesis. Under normal conditions, the E3 ubiquitin ligase MDM2 maintains a basal low level of p53 by promoting its ubiquitination and 26S proteasomal degradation. In 2020, Yuan et al. revealed that USP10 is a novel regulator of p53. The depletion of USP10 significantly reduced p53's stabilization by increasing its ubiquitination. Moreover, the expression of p53's downstream target genes, including p21 and Bax, was also downregulated [27]. Mechanistically, USP10 can reverse the activity of Mdm2 through deubiquitinating p53 in the cytoplasm, causing the return of p53 from the cytoplasm to the nucleus and affecting the nuclear output of the p53 protein. Moreover, USP10 plays an essential role in stabilizing the p53 protein in neurodegenerative diseases, and several proteins regulate the cell cycle by regulating USP10, affecting the stability of p53. For example, miR-191, a factor promoting the proliferation of pancreatic cancer cells, acts by negatively regulating USP10 and thereby reducing the content of p53 [28]. In thyroid cancer and gastric cancer, DZNep, an essential component of PRC2, also regulates p53 by regulating the content of USP10 [29,30]. Interestingly, a negative-feedback loop between USP10 and p53 was observed by Luo et al., who found that miR-138 could decrease the activity of USP10 through specifically binding to the 3 -UTR of USP10's mRNA following an increase in the expression of the p53 protein, but in turn, p53 reduced the expression of miR-138, forming a feedback pathway [31]. It is worth noting that, in addition to regulating DNA damage via p53 ubiquitination, USP10 also stabilizes MSH2 (an important DNA mismatch-repair protein) and TRAF6 (an activator of NF-κB), which affects DNA repair and other physiological processes during DNA damage [23,25,27,32,33]. In line with this, some studies found that the disruption of the interaction between USP10 and p53 inhibited cancer cells' viability or tumor growth. For example, mycoplasma DnaK particles inhibited the anticancer activity of p53 through binding to USP10 [34]. The inhibition of the proline hydroxylase PHD3 could significantly reduce the binding of USP10 to p53 [35]. Resveratrol promotes the p53-mediated apoptosis of cancer cells by acting on the USP10-binding protein G3BP1 [36]. 
IGF2BP3 (insulin-like growth factor 2 mRNA binding protein 3) inhibits p53 expression by disrupting the interaction between USP10 and p53 and promotes lung cancer progression [37]. Although some substrates and regulators of USP10 have been reported, the role of USP10 in tumorigenesis remains poorly understood. Herein, we focus on the involvement of USP10, along with its substrates and upstream regulators, in cancer cells. We expect that the summary of such knowledge will help us to develop new inhibitors or strategies to target USP10 in human cancers in the future.

USP10 Is a USP-Family Deubiquitinating Enzyme

The USP family is a relatively large group (with more than 50 known members) of deubiquitinating enzymes [38,39]. As the largest and most diverse family of DUBs, the USPs have many similarities with and differences from other DUBs, such as OTUs (ovarian tumor-related proteases), JAMMs (Jad1/Pad/MPN domain-containing metalloenzymes), UCHs (ubiquitin C-terminal hydrolases), MJDs/Josephin (Machado-Joseph disease proteases), and the ZUP1/ZUFSP (zinc finger with UFM1-specific peptidase) family. All these DUBs can reverse ubiquitination by cleaving the peptide or isopeptide bond and removing ubiquitin from its substrates. However, they can be classified into cysteine proteases and metalloproteases. For example, the JAMMs are known to be metalloproteases, whereas the OTUs, USPs, MJDs/Josephin, and UCHs are considered cysteine proteases [40]. Interestingly, among the cysteine-protease DUBs, the USPs are larger than the other DUBs [9]. More importantly, they contain one specific USP domain (also known as the catalytic domain), which comprises three conserved subdomains that form a hand-like structure consisting of a thumb, fingers, and a palm [41]. Different domains or subdomains play important roles in regulating the enzyme activity. All the USPs contain short motifs (two short conserved fragments, a cysteine box and a histidine box) and a finger-like structure built from β-sheets that supports the bound ubiquitin and mediates the interaction with the ubiquitin substrate. The thumb is composed of the central catalytic helix, the nucleophilic cysteine, and the core protease structure. The palm contains aspartic acid and histidine residues and the conserved β-sheet structure of the protease core. The cleft between the thumb and the palm therefore provides the catalytic center of the USP domain [9,13,15,42]. Recent results from NMR structural studies show that the active site undergoes rearrangements upon Ub binding to a USP. Subsequently, a conformational change in the catalytic domain promotes the catalytic hydrolysis of Ub from the ubiquitinated proteins. The USPs are grouped into five subfamilies based on their ubiquitin-binding domain architecture: a Ub-associated domain (UBA) subfamily (5 members), a Ub-interacting motif (UIM) subfamily (3 members), a zinc-finger Ub-specific protease domain (ZnF-UBP) subfamily (10 members), a ubiquitin-like (UBL) domain-related subfamily, and a USP10-related subfamily. The domain structures of the ZnF, UIM, and UBA subfamilies are shown in Figure 1: a zinc-finger Ub-specific protease domain (ZnF-UBP) is present in ten members (upper), a Ub-interacting motif (UIM) in three members (middle), and a Ub-associated domain (UBA) in five members (lower). However, USP5 and USP13 from the ZnF family, together with USP25 and USP28 from the UIM family, also have UBA domains. The domain structures of the UBL-related subfamily are shown in Figure 2. All of its members contain the ubiquitin-specific protease (USP) domain and a ubiquitin-like (UBL) domain.
In addition, different members have other additional domains, such as transmembrane and coiled-coil domains. The domain structures of the USP10-related subfamily are shown in Figure 3. All of its members contain the ubiquitin-specific protease (USP) domain. In addition, USP8 and USP54 also contain a microtubule-interacting and trafficking (MIT) domain or a rhodanese (Rhod) domain, both USP26 and USP19 include coiled-coil domains, USP30 contains a transmembrane domain, and CYLD includes the CAP-Gly domain (CAP) and a B-Box (B). The human USP10 gene encodes 798 amino acids forming a typical USP domain. USP10 consists of a larger N-terminal region, a USP catalytic domain (about 380 amino acids, starting at amino acid 415 from the N-terminus [43]), and a smaller C-terminal region (Figure 1) [44]. The USP10 proteins are highly conserved among humans and other mammals. For example, there is about 99% identity between the amino acid sequences of the human and rat or mouse forms [9]. Similar to other DUBs, USP10 deubiquitinates substrates and cleaves Ub from their C-termini through four steps. In the first step, the USP binds to the ubiquitin COOH terminus via its USP domain, causing a conformational change in the catalytic domain. Then, the conserved residues (Cys, His, and Asp/Asn) form a catalytic triad in a specific manner. Next, the deprotonated thiol group carries out a nucleophilic attack on the carbonyl carbon, and the active site is shifted from its initial position. Finally, the detached Ub is released from the target protein [9,45]. It is known that the USPs cleave specific ubiquitin linkages or process ubiquitin chains depending on the substrate. For example, both USP21 and USP7 cleave K6-linked ubiquitin chains. USP7 can also cleave K11-, K33-, K48-, and K63-linked chains [9,45]. However, whether and how USP10 can also remove K11-, K48-, and K63-linked ubiquitin chains remains largely unknown. Recently, Yuan et al. demonstrated that USP10 stabilized Smad4 by directly binding to Smad4 and removing a Lys48-linked polyubiquitin chain [46]. Another previous study from Hu et al. also showed that the depletion of USP10 increased the K48-linked polyubiquitination of HDAC6 in the non-small-cell lung cancer (NSCLC) cell line H1299. In addition, He et al. discovered that USP10 reduced the K63-linked polyubiquitination of PTEN in the NSCLC A549 cell line [47]. All these modifications affect the stability of the proteins targeted by USP10. It has been reported that many other proteins, such as TP53, RPS2, RPS3, RPS10, and LC3B, can be deubiquitinated by USP10 and are known substrates (as summarized in Table 1). Although many of their ubiquitin sites have been identified, the mechanism of ubiquitin recognition by USP10 remains elusive. It will be intriguing to assess how, and which, ubiquitin linkages can be recognized and cleaved by USP10 in vitro and in vivo. More importantly, the specificity of USP10 for chains other than K48- and K63-linked ubiquitin is also worthy of future investigation. The substrates summarized in Table 1, and the manner in which USP10 regulates each of them, are as follows.
Yki: USP10 binds to Yki and promotes Yki deubiquitination and stabilization, protecting it from ubiquitin-proteasome-mediated degradation, and USP10 may similarly regulate human YAP activity [48].
RPS3: RPS3 is monoubiquitinated by the Hel2p E3 ligase, which is regulated by the interaction between Hel2p and UBP3 [11].
AR: USP10 is a cofactor that binds to AR and stimulates the androgen response of target promoters; USP10 thus regulates the activity of AR. USP10 releases AR in the cytosol and enhances the nuclear entry and transcriptional activity of AR, thereby affecting the AR signaling pathway [49].
PCNA: USPs regulate the stability of DNA polymerase η.
p53: USP10 has been identified as a regulator of p53. Under non-stress conditions, USP10 releases p53 in the cytoplasm, thus countering MDM2's action and allowing nuclear re-entry. During DNA damage, USP10 accumulates in the nucleus, is phosphorylated by ATM, and deubiquitinates p53 in the nucleus [27,51].
AMPKα: USP10 activity promotes the LKB1-mediated phosphorylation of AMPKα at Thr172. USP10 stabilizes AMPKα by inhibiting AMPKα ubiquitination in HCC, which results in the inhibition of AKT and mTOR activation [15].
NOTCH1: USP10 regulates Notch signaling during angiogenic sprouting by interacting with and stabilizing the NOTCH1 intracellular domain (NICD1) in endothelial cells. Notch signaling is important in determining the sprouting behavior of endothelial cells [54].
G3BP: USP10 regulates deubiquitination activity and membrane transport between the endoplasmic reticulum and the Golgi apparatus by binding to G3BP. Stress granules (SGs) are dynamic RNA-protein complexes located in the cytoplasm that rapidly form under stress and disperse when normal conditions return. The formation of SGs depends on the Ras-GAP SH3-domain-binding protein (G3BP). USP10 binds to G3BP to form the USP10-G3BP1 complex, which is required for the deubiquitination of RPS2, RPS3, and RPS10; the modified 40S subunit is thereby saved from degradation [55][56][57].
SIRT6: USP10 antagonizes the transcriptional activation of the c-Myc oncogene through SIRT6 and TP53, thereby inhibiting cell-cycle progression, cancer cell growth, and tumor formation [25].
PTEN: USP10 inhibits the growth and invasion of lung cancer cells through the upregulation of PTEN.
SYK: USP10 inhibition is a novel approach to inhibiting spleen tyrosine kinase (SYK) and impeding its role in AML pathology, including tumorigenic FLT3-positive AML. USP10 forms a complex with FLT3-ITD and physically binds to SYK, stabilizing the levels of both proteins through deubiquitination; USP10 can directly interact with SYK [62].
CFTR: USP10 promotes the endocytic recycling of CFTR by deubiquitination. USP10 mediates the deubiquitination of the cystic fibrosis transmembrane conductance regulator (CFTR) in early endosomes, thus enhancing the endocytosis and recycling of CFTR. It also directly interacts with and deubiquitinates CFTR, resulting in increased expression of CFTR on the cell surface [39,64].
Sorting nexin 3 (SNX3): The expression of USP10 increases the SNX3 protein level. USP10 increases SNX3 expression by deubiquitination and reduced proteasomal degradation, both of which promote ENaC export to the plasma membrane through the secretory pathway [65].
CD36: USP10 regulates CD36 expression and promotes foam cell formation.
Smad4: USP10 directly interacts with Smad4 and removes the proteolytic Lys48-linked polyubiquitin chain, resulting in Smad4 stabilization and the activation of TGF-β signaling, which promotes HCC metastasis [46].
αv integrin: USP10 is important for myofibroblast development.
LC3B: USP10 deubiquitinates LC3B and increases the LC3B level and autophagic activity.
p14ARF: USP10 transcription induced by c-Myc improves the stability of the p14ARF protein [69].

Regulation of USP10

As for most deubiquitinating enzymes, the activity of the USPs can be regulated by multiple mechanisms, including those acting at the transcriptional and post-translational levels.
Previous studies showed that a subset of USP-family deubiquitinating enzymes including USP1, USP4, USP8, and USP13 can be phosphorylated by the CDK1, AKT, or CLK3 kinases, thereby modulating the binding of these USPs to their substrates and partners [45]. For example, the CDK1-mediated serine phosphorylation of USP1 disrupts the interaction between USP1 and UAF1. CLK3 mediates the phosphorylation of USP13 at Y708, promoting its binding to c-Myc, an important transcription factor that is also a well-known oncogene in many cancers [71]. In line with the above studies, USP10's activity can be regulated by phosphorylation in its N-terminal domain. Deng et al. revealed that AMPKα directly mediates the phosphorylation of Ser76 in the USP10 N-terminus and increases its activity. Interestingly, USP10 in turn regulates the deubiquitination of AMPKα, creating a positive-feedback pathway [15] (Figure 4). Yuan et al. demonstrated that ATM phosphorylates USP10 at Thr42 and Ser337 upon DNA damage, whereupon USP10 translocates into the nucleus, where the N-terminus of USP10 (amino acids 1-100) binds to p53 and inhibits its ubiquitination [27]. In addition, Luo et al. reported that co-stimulation by BCR and TLR1/2 initiated the AKT-dependent phosphorylation of T674 of USP10; subsequently, USP10 entered the nucleus and stabilized the AID protein (also see Figure 4 and Table 2) [72]. In addition to phosphorylation directly controlling the activation of USP10, a previous study from the Deng group revealed that, in keloid cells, TRAF4 inhibited USP10-mediated p53 deubiquitination and stabilization by disrupting the access of p53 to USP10 [73]. Liu et al. found that USP10 stabilized beclin-1 by interacting with it and inhibiting its ubiquitination. Very interestingly, they also pointed out that beclin-1 controls the stability of USP10 by regulating the stability of USP13, which can deubiquitinate USP10 (also see Figure 4) [53]. Another example of feedback regulation was reported by several groups [33,53,74,75]. They found that MCPIP1 directly binds to USP10 and serves as a bridge between USP10 and TANK [33]. MCPIP1 forms a complex with USP10 and TANK, which mediates the deubiquitination of the TRAF6 K63 chain [76] and inhibits NF-κB activation upon DNA damage [33]. On the other hand, genotoxic-stress-induced NF-κB activation enhanced MCPIP1 transcription [75]. In addition, Lin et al. demonstrated that USP10 enhances the stability of SIRT6 by interacting with its N-terminal regulatory domain and deubiquitinating SIRT6, and thereby inhibits the transcriptional activity of the c-Myc oncogene through SIRT6. Consistent with these findings, Zhou et al. found that USP13 also affects the ubiquitination of c-Myc and enhances its stability [77]. However, feedback regulation between USP10 and c-Myc has also been found: c-Myc directly binds to the second E-box sequence of the USP10 gene and activates the transcription of USP10 [69]. All these findings suggest that regulatory feedback mechanisms play an important role in signaling pathway regulation and in maintaining a balance in protein levels. They also encourage us to identify many more modifiers of USP10, such as kinases and DUBs, and to uncover the mechanisms of USP10's dynamic regulation. This highlights that identifying more regulators of USP10 will be necessary for clarifying its role in disease. In Figure 4, an arrowhead from USP10 indicates positive regulation of the substrate protein.
Single arrowheads with red lines indicate that USP10 only regulates the substrate, while double arrowheads with red lines indicate that the two proteins can regulate each other. For example, USP10 regulates the activity or expression of integrin, AID, ITCH, p53, USP13, and so on; conversely, p53 and USP13 can also regulate USP10, forming feedback loops. An arrowhead toward USP10 indicates positive regulation of USP10's deubiquitinating enzyme activity, while a blunt head toward USP10 indicates negative regulation. As shown, both AKT and ATM increase the activity of USP10, whereas overexpression of miR-138, miR-191, and miR-34a-5p inhibits the mRNA and protein expression levels of USP10. A curved black arrowhead indicates positive regulation. In addition to being regulated by host cell proteins, USP10 is also targeted by viral proteins: previous studies from the Takahashi group showed that HTLV-1 Tax interacts with amino acids 727-798 at the C-terminus of USP10 and inhibits USP10's deubiquitination activity and function. For example, inhibiting USP10 reduces the production of reactive oxygen species and suppresses apoptosis and the formation of SGs (stress granules) [78]. On the other hand, it has been found that USP10 can be regulated at the transcriptional level. Generally, a microRNA binds to the 3′-untranslated region (3′-UTR) of an mRNA in a sequence-specific manner, inducing the mRNA's degradation or inhibiting its translation. Luo et al. revealed that USP10 is a target of miR-138 and that a conserved region in the 3′-UTR of USP10 directly interacts with miR-138. The overexpression of miR-138 inhibits the mRNA and protein expression levels of USP10 (also see Figure 4 and Table 2). Moreover, they also found that p53, which can be deubiquitinated by USP10, negatively regulates the expression of miR-138 through binding to the miRNA's promoter region [31]. In other words, there is feedback regulation. Another microRNA, miR-191, is considered an upstream regulator of USP10 in pancreatic ductal adenocarcinoma (PDAC) cells [28]. All the regulators of USP10 are summarized in Table 2. For example, overexpressed FOXO4 inhibits USP10 transcription and protein expression by binding to bases 1771-1776 in the promoter region of USP10, near its transcription start site (TSS) [82]. Zhang et al. indicated that miR-34a-5p acts as a negative regulator of USP10, but the underlying mechanism of the microRNA's action remains largely unclear [79]. More importantly, whether there is feedback regulation here is also unknown. This highlights that identifying more regulators of USP10 will be essential for clarifying its mechanisms and roles in different diseases, especially cancer.

USP10 in Cancers

USP10 plays multiple roles in many diseases including human cancers. It has been shown that the overexpression of USP10 promotes the proliferation and metastasis of multiple tumors, including adult T-cell leukemia, glioblastoma multiforme, chronic myeloid leukemia, non-small-cell lung cancer, hepatocellular carcinoma (HCC), and colon cancer. However, it has also been reported that the expression of USP10 is reduced in gastric carcinoma (GC), HCC, and colon cancer and that USP10 suppresses tumorigenesis through regulating different signaling pathways, such as the p53 apoptosis pathway, suggesting that USP10 is, indeed, a double-edged sword in different cancer types or in different cell contexts within the same cancer. We will further discuss the oncogenic and tumor-suppressor roles of USP10 in the following sections.
USP10 as an Oncogene

It was first shown that increased expression of USP10 predicted poor survival in patients with glioblastoma multiforme (GBM) [83]. Mounting evidence has since indicated that USP10 plays an oncogenic role in tumorigenesis. In adult T-cell leukemia (ATL), previous studies have revealed that the human T-cell leukemia virus type 1 (HTLV-1) oncoprotein Tax interacts with USP10 and promotes ROS-dependent apoptosis and the occurrence of ATL. Moreover, USP10 was also found to reduce the sensitivity of ATL cells to chemotherapeutic drugs including arsenic [19,78]. Recently, the Weisberg group verified that increased expression of mutant FLT3-ITD correlates with poor prognosis and low survival. Mechanistically, USP10 leads to the accumulation of FLT3-ITD by deubiquitinating and stabilizing the mutant FLT3-ITD, promoting acute myeloid leukemia (AML) progression [9,61]. In addition to mutant FLT3-ITD, spleen tyrosine kinase (SYK), a critical regulator of FLT3, is also deubiquitinated and stabilized by USP10; depleting or inhibiting USP10 reduced the expression of SYK, a kinase that promotes the proliferation of Ba/F3 and MOLM14 cells [68,84]. Chronic myeloid leukemia (CML) is positively associated with abnormally high expression of the tyrosine kinase BCR-ABL. Liao et al. found that both USP10 and SKP2 are more highly expressed in the primary monocytes of patients with CML than in those of healthy humans. Increased USP10 led to the deubiquitination and stabilization of SKP2, causing BCR-ABL activation and promoting the proliferation of CML cells [85,86]. To date, in addition to blood-related cancers, USP10 also plays oncogenic roles in many solid tumors including hepatocellular carcinoma (HCC) [46], colon cancer [68,84], renal-cell carcinoma (RCC) [27], non-small-cell lung cancer (NSCLC) [27], prostate cancer (PCa) [87,88], esophageal cancer [87,88], breast cancer [75,89], and melanoma [87,88]. In tissue samples from HCC patients, increased USP10 levels are positively correlated with the abundance of YAP/TAZ [90]. Functionally, USP10 directly deubiquitinates and stabilizes YAP/TAZ and promotes HCC progression. Additionally, other substrates and pathways affected by USP10 in HCC have been identified, such as Smad4 and TGF-β signaling. Using a functional RNA-interference screening approach, Yuan et al. found that USP10 directly binds to and stabilizes Smad4, promoting HCC metastasis; thus, USP10 is considered a prognostic and therapeutic target in Smad4-positive metastatic HCC patients [46]. In prostate cancer, high expression of USP10 indicates poor prognosis [18,91]. USP10 promotes the proliferation of PCa cell lines by binding to and increasing the stability of G3BP2, which inhibits p53 activity. On the other hand, the androgen receptor (AR) plays a crucial role in the regulation of PCa progression, and AR-related proteins including H2A, Zub1, and H2Aub1 can be deubiquitinated and stabilized by USP10. In esophageal cancer and human neuroblastoma, USP10 affects cancer cell proliferation by stabilizing PCNA and NRF-1, respectively [92]. These findings suggest that USP10 can function as an oncogene that helps drive cell proliferation in tumor cells. In addition to its differential effects in different cancers, USP10 can be a double-edged sword within the same cancer. For example, in colon cancer, USP10 interacted with and stabilized SIRT6 by inhibiting SIRT6's ubiquitination, suppressing proliferation [25].
However, the USP10-mediated deubiquitination of NLRP7 or MSI2 increases their stabilization and expression, promoting the occurrence of CRC [68,84]. It will be intriguing to assess when and how USP10 deubiquitinates different proteins in vitro and in vivo. In contrast to the tumor-suppressor role of USP10 in p53-WT cancer cells, USP10 exerts an oncogenic function in p53-mutant cancer cells. For example, in p53-mutant RCC cells, increased USP10 expression promotes cell proliferation via deubiquitinating and stabilizing the mutant p53 [27]. Additionally, in NSCLC, the USP10-mediated deubiquitination of the oncogenic protein histone deacetylase 6 (HDAC6) leads to cisplatin resistance in patients harboring mutant p53 [93]. In addition, USP10 affects NSCLC progression through regulating EIF4G1 in a p53-independent manner [94]. Beyond controlling cell proliferation, a study from the Ouchida group revealed that USP10 promotes tumor migration and invasion. Epithelial-to-mesenchymal transition (EMT) plays an essential role in the process of tumor invasion. USP10 facilitates tumor migration by stabilizing the protein abundance of the EMT transcription factor Slug [7]. Moreover, USP10 activates the Raf-1/MEK/ERK pathway, which is an important regulator of EMT [60]. Functionally, USP10 stabilizes the ITCH E3 ligase, enhancing the stability of MEK1 and activating the Raf-1/MEK/ERK pathway. Recently, our group reported that ITCH polyubiquitinated and activated BRAF in melanoma cells in response to proinflammatory cytokines, leading to the elevation of MEK/ERK signaling [95]. This further supports the role of USP10 in activating the Raf-1/MEK/ERK pathway. In addition, spautin-1, an inhibitor of USP10, inhibits melanoma growth and improves the anticancer effect of cisplatin by inhibiting USP10 activity [87,88]. However, the underlying mechanism remains largely unknown.

USP10 as a Tumor Suppressor

To understand the role of USP10 in liver cancer, Lu et al. analyzed 74 pairs of paraffin-embedded HCC tissues and adjacent non-tumor specimens from HCC patients (61 men and 13 women) and found that, compared to low levels of USP10, high levels of USP10 predicted longer disease-free survival and overall survival. In addition, USP10's mRNA expression was downregulated in clinical HCC tissue samples compared with adjacent non-tumor samples [96]. Mechanistically, USP10 stabilizes PTEN and AMPKα by inhibiting the Lys48-linked polyubiquitylation of PTEN and regulating the K63-linked ubiquitin chain of AMPKα; together, these actions inhibit mTOR activation and suppress the proliferation of HCC cell lines [96]. As in HCC, USP10 was found to be downregulated in lung cancer, and the knockdown of USP10 increased PTEN ubiquitination and promoted tumor growth and invasion [97]. Recently, Yu et al. demonstrated that the protein and mRNA levels of the tumor suppressor KLF4 are reduced in lung cancer tissues and that the depletion of KLF4 facilitates the development of lung cancer [98]. Wang et al. further found that USP10 maintains KLF4's stability through deubiquitinating KLF4. Similarly, MSH2 (MutS homolog 2), another USP10 substrate, was identified in lung cancers by Zhang et al. Their results show that the depletion of USP10 in A549 cells increased cell survival and decreased apoptosis through destabilizing MSH2 [32]. All these results indicate that, within the same cancer, USP10 can affect tumor development via regulating different substrates.
Growing evidence has shown that USP10 plays important tumor-suppressor roles in different cancers, such as colorectal cancer, non-small-cell lung cancer, small intestinal adenocarcinoma, epithelial ovarian cancer, renal-cell carcinoma, breast cancer, and gastric cancer [99][100][101]. In 2020, Bhattacharya summarized the roles of USP10 in several cancers [9]. For example, in colon cancer, USP10 interacts with and stabilizes SIRT6 by suppressing SIRT6 ubiquitination [25]; the stabilized SIRT6, together with p53, antagonizes c-Myc transcriptional activity, further inhibiting cell-cycle progression, cell growth, and tumorigenesis. In addition, it has also been shown that USP10 is silenced by methylation and downregulated in the early stages of colorectal cancer [99]. Taken together, these findings suggest that USP10 also functions as a tumor suppressor. USP10 has lower expression in GC than in para-cancerous tissues. More importantly, low expression of USP10 indicates poor prognosis in GC patients, suggesting that USP10 might be a promising prognostic marker in GC [102]. Interestingly, S100A12 (also named calgranulin C) has been found to be downregulated in GC clinical samples and positively correlated with USP10 expression. Previous studies using RCC (renal-cell carcinoma) tissue microarrays indicated that USP10 expression was reduced in RCC samples compared with normal renal tissues, and the reconstitution of USP10 inhibited the colony formation and cell proliferation of the CAKI-1 and CAKI-2 RCC cell lines. However, it is still unknown whether USP10 can deubiquitinate and stabilize these related proteins, and further studies are needed to uncover the underlying mechanisms [102,103]. More recently, Kim et al. demonstrated that USP10 effectively suppressed curcumin-induced paraptosis in malignant breast cancer cells through a mechanism not involving the regulation of beclin-1, p53, or AMPK. These results indicate that USP10 may also function in a deubiquitinase-independent manner [104]. Given that USP10 may be a tumor suppressor, several upstream regulators of USP10, including miR-191 and DZNep, were identified in pancreatic cancer and thyroid cancer cells, respectively. Mechanistically, both of them promote cell proliferation by inhibiting the expression of USP10, which in turn decreases the stability of p53 [29,30]. Overall, all the above studies suggest that USP10 can act as a tumor suppressor via different molecular and cellular mechanisms.

Targeting USP10 in Human Cancers

Although USP10 is a double-edged sword and plays a dual role in tumorigenesis, it is widely considered an oncogene in specific contexts, as described above. In light of the important oncogenic role of USP10 in many cancers, inhibiting the expression or activity of USP10 is expected to be a promising therapeutic strategy for treating a range of human tumors. In line with this notion, many researchers have developed multiple screening methods to identify small-molecule inhibitors for cancer treatment, such as activity-based probes, Ub-7-amino-4-methylcoumarin (Ub-AMC), Ub-phospholipase A2 (Ub-PLA2), time-resolved fluorescence resonance energy transfer (TR-FRET), SDS-PAGE-Coomassie assays, and many others [105]. Different methods have different advantages and disadvantages. For example, Ub-PLA2 cannot be used to screen inhibitors of the UCH family, whereas TR-FRET is very sensitive for screening UCH-family inhibitors. However, compared with Ub-PLA2, a major drawback of TR-FRET methods is the lack of commercially available kits.
More details about these methods have been reviewed by Chen et al. [105]. In recent years, several inhibitors have been developed based on the structure or enzyme activity of the USPs, such as GW7647 for USP1 [106], Q29 for USP2 [107], PR-619 for USP2/4/5/7/8/15/20/28/47 [105], HBX19818 for USP7/10 [108], and VLX1570 for USP14 [109]. Among them, VLX1570 was the first DUB inhibitor approved for clinical trials [109]. However, VLX1570 has been demonstrated to cause many adverse effects, especially severe respiratory insufficiency, potentially because the accumulation of the drug's metabolites promotes lung-tissue damage [105,110]. Therefore, screening and developing small-molecule inhibitors that specifically target the USPs could be a promising strategy for human cancer therapy. Below, we focus on the progress regarding USP10 inhibitors.

Spautin-1

In 2011, spautin-1 was reported for the first time as a highly potent autophagy inhibitor, identified using imaging-based screening. Spautin-1 inhibits autophagy by reducing the activity of USP10 and USP13, thereby promoting the degradation of Vps34 complexes, which mediate a major step of autophagy. In addition, spautin-1 also downregulated the expression of USP10 and USP13 at the protein level [53]. Given its role in autophagy, spautin-1 can improve the therapeutic efficacy of imatinib mesylate (IM) for CML patients. Mechanistically, spautin-1 significantly inhibits IM-induced autophagy in CML cells by downregulating beclin-1 and enhances IM-induced apoptosis by inactivating PI3K/AKT and GSK-3β [111]. CFTR plays an important role in cystic fibrosis (CF) [112]. Although it has been shown that USP10 deubiquitinates and stabilizes CFTR [64], Pesce et al. found that spautin-1 did not affect the expression of CFTR, indicating that spautin-1 affected CF progression in a USP10-independent manner; however, the underlying mechanism is largely unknown. Interestingly, spautin-1 was recently shown to suppress cell growth or kill cancer cell lines in an autophagy-independent manner. A report from the Yang group revealed that spautin-1 promoted immunogenic cell death (ICD) through the activation of the JUN transcription factor in response to mitochondrial oxidative injury, ultimately resulting in the upregulation of many cytokines including CXCL10 (C-X-C motif chemokine ligand 10) and IL-6 (interleukin-6) [113]. In addition, the inhibition of USP10 by spautin-1 significantly suppressed the migration and metastasis of HCC; further evidence revealed that USP10 directly binds to Smad4 and stabilizes Smad4 through deubiquitination [46]. Moreover, spautin-1 suppresses the survival of ovarian cancer, prostate cancer, melanoma, and NSCLC cells in a USP10-independent manner [114]. More recently, spautin-A41, an analog of spautin-1, was developed by Elsocht et al., who found that both its autophagy-inhibiting effect and its microsomal stability were much better than those of spautin-1 [115]. However, there is still no evidence showing that spautin-1 or spautin-A41 directly binds to USP10. Further investigation is clearly warranted to determine whether spautin-1 or spautin-A41 could be advanced to clinical trials for cancer therapy in vivo.

P22077 and HBX19818 (or Analogs)

Similar to spautin-1, another two DUB inhibitors were identified in 2011, when Altun et al. found that PR-619 and P22077 inhibited DUB activity using an activity-based chemical proteomics screening method.
They further validated that PR-619 inhibited a broad range of DUBs, but P22077 only targeted USP7. Almost simultaneously, another USP7 inhibitor, HBX19818, and its analogs were developed using biochemical assays and an activity-based protein-profiling strategy. It has been demonstrated that P22077 inhibits the proliferation of neuroblastoma, colon cancer, ovarian cancer, and lung cancer cells via different mechanisms including the induction of p53-mediated apoptosis [116][117][118]. Recently, P22077 was reported to inhibit the growth and metastasis of melanoma by activating the ATM/ATR signaling pathway [119]. Notably, P22077, HBX19818, and HBX19818 analogs including Compounds 3, 7, and 9 also inhibit USP10 in mutant-FLT3-expressing cells. Among the HBX19818 analogs, Compounds 3 and 9 are more specific for USP10 but not USP7. Furthermore, Compound 3 also has a stronger inhibitory activity at low concentrations than the others. In addition, the results indicate that the IC50 of HBX19818 for USP10 is 14 µM and that of P22077 is 6 µM in these cells. Additionally, the inhibitory activity is lower than that for USP7. Importantly, both of them can directly bind to and inhibit USP10. As expected, P22077, HBX19818, and its analogs inhibited the growth of acute myeloid leukemia harboring an FLT3 mutation [61]. On the other hand, it has been reported that USP10 is closely related to the DNA-damage response. It would be intriguing to explore whether a synergistic therapy combining USP10 and an effector of DNA damage would be a more efficient strategy for cancer treatment. Wu-5 Wu-5, a novel USP10 inhibitor, was found by screening an in-house compound library, and it was shown to overcome FLT3-inhibitor resistance and enhance the anti-AML effect of crenolanib. Mechanistically, Wu-5 inhibits USP10 activity through interacting with USP10 in cells. Subsequently, it reduces the expression of the downstream effector AMPKα, suppressing the growth of MV4-11 cells [120]. However, further in-depth research is required to determine whether Wu-5 has broad-range effects and specificity in human cancer cells and clinical efficacy. Quercetin Quercetin (C15H10O7) is a pentahydroxyflavone, widely present in many fruits and vegetables. It exerts many effects combating different diseases, such as cancer, immunity diseases, and cardiovascular diseases. Dysregulation of the T-box transcriptional factor T-bet can cause many immune-mediated diseases [121,122]. A previous study by Pan et al. revealed that quercetin reduced the expression of USP10, which interacts with and maintains the level of T-bet, resulting in T-bet downregulation [14]. Therefore, quercetin was considered a T-bet inhibitor. Many studies have demonstrated that quercetin inhibits the growth of many cancer cells including those of breast cancer, colon cancer, lung cancer, and other cancers [122]. Although several clinical trials have been published, there is still no direct clinical evidence showing that it has any therapeutic effects in human cancers. Therefore, it is necessary to further clarify the role of quercetin and explore whether the inhibition of USP10 is the major mechanism for cancer treatment. It will also be interesting to screen many more flavonoids and develop quercetin derivatives targeting USP10 for anticancer therapy. Traditional Chinese Medicine Traditional Chinese Medicine plays a crucial role in the treatment of many diseases including cancer. 
In 2009, a medicine called Cai's Neiyi Prescription (CNYP) was developed by Cai; it inhibits inflammation by inducing the apoptosis of endometrial stromal cells. Furthermore, it was found that CNYP reduces USP10's mRNA and protein expression. Altogether, these observations indicate that CNYP is an inhibitor of USP10 and suggest that it may have potential as an anticancer agent [12]. Given that Traditional Chinese Medicine has relatively few adverse effects, it will also be intriguing to test whether CNYP inhibits cancer cell proliferation or metastasis in vitro and in vivo, with the potential for its earlier use in patients.

UbV.10.1

Different from other USP10 inhibitors, UbV.10.1 is a protein (an engineered ubiquitin variant) rather than a small molecule; it binds USP10 with high affinity and inhibits its activity. It was identified by screening a phage-displayed ubiquitin variant (UbV) library and is considered an inhibitor of endogenous USP10 in cells. The overexpression of UbV.10.1 facilitates p53's export from the nucleus to the cytoplasm and its degradation through the inhibition of USP10 [123]. However, it is still unknown whether it can inhibit cancer cell proliferation, metastasis, or related phenotypes, and additional in-depth investigation is necessary before this inhibitor could be considered for clinical trials.

DZNep

3-Deazaneplanocin A (DZNep) was developed as an inhibitor of S-adenosylhomocysteine hydrolase. It exerts anticancer effects in many cancers, including blood and solid cancers, through inhibiting EZH2 or other methyltransferase activity [124]. A previous report revealed that DZNep suppressed the proliferation of TP53-wild-type cells but not TP53-mutant cells. One reason is that it can activate the p53 pathway by increasing USP10 expression; however, DZNep is also toxic to the majority of cancer cell lines and has not been approved by the Food and Drug Administration (FDA). In 2020, another EZH2 inhibitor, tazemetostat, was approved by the FDA for the treatment of metastatic epithelioid sarcoma [125]. This encourages us to explore whether tazemetostat or other EZH2 inhibitors act against cancers in a USP10-independent manner. It would also be very interesting to test whether tazemetostat is more effective in combination with drugs targeting other pathways, such as the p53 pathway.

Conclusions and Perspectives

Since its discovery in 2001, USP10's roles in regulating diverse cellular processes and different diseases, especially cancers, have been widely investigated [9,51]. In brief, USP10 is a double-edged sword in human tumorigenesis owing to the complexity of cellular contexts. Therefore, given the oncogenic role of USP10, the development of more specific USP10 inhibitors could be a promising strategy for cancer treatment. To date, only Compounds 3 and 9 of the HBX19818 analogs are relatively specific for USP10, while other inhibitors, including spautin-1 (the first to be developed against USP10), are not specific for USP10 and have at least two targets [61,105]. With this in mind, it is necessary and urgent to screen for or develop much more specific USP10 inhibitors. In addition to directly targeting USP10, alternative approaches also need to be considered, such as inhibiting USP10 by acting on its upstream regulators, including USP13, estrogen, and microRNAs (as summarized in Table 2). On the other hand, owing to their natural origin and low incidence of side effects, natural agents such as quercetin and Traditional Chinese Medicine preparations could provide a safe and effective strategy for inhibiting USP10 in cancer therapy.
Although some upstream regulators and substrates of USP10 have been identified, in-depth investigations of the underlying mechanisms of USP10-mediated tumorigenesis and the development of corresponding drugs are urgently needed. Mounting evidence indicates that some enzymes, such as EZH2, also exert functions that are independent of their enzymatic activity. Therefore, we also need to pay more attention to exploring possible roles of USP10 beyond its deubiquitinase activity. In light of the dual role of USP10 in tumorigenesis, it would be fascinating to explore the physiological role of USP10 in the progression of different tumors by using conditional knockout (KO) or knock-in mouse models. Looking forward, cancer-type-specific USP10 animal models will accelerate and improve the development of specific USP10 inhibitors. We believe that inhibitors of USP10 with the best possible specificity and efficacy will be developed and tested in clinical trials for the therapy of different cancers in the near future.
Lead in Mexican Children: Pottery Use Slows Reductions in Blood

Getting the lead out of Mexico City gasoline has contributed to a significant drop in the blood lead of local children, as it has elsewhere in the world, according to what is likely the first long-term study of such effects in a single group of people [EHP 112:1110–1115]. But the drop within children wasn't nearly as large as the drop in the air might have suggested. Other lead sources such as ceramic pottery and local industry, combined with poor nutrition, likely are keeping blood lead concentrations elevated at levels 3–4 times higher than those found in U.S. children, concludes a team of researchers led by Lourdes Schnaas of the Mexican National Institute of Perinatology. The team began its study by recruiting 502 pregnant women attending the institute's prenatal clinic in Mexico City. The researchers followed 321 healthy children born to these women between 1987 and 1992. Team members followed each child for 10 years, taking blood samples every six months. They also tracked airborne lead concentrations using government data. The study period coincided with government actions that led to sharp drops in lead in gasoline, with total elimination by September 1997. Those moves helped slash mean yearly airborne lead concentrations from 2.8 micrograms per cubic meter (μg/m3) in 1987 to 0.07 μg/m3 in 2002. The children's mean blood lead dropped concurrently. In the group of children born earliest in the study (while most gasoline was still heavily leaded), 89% exceeded the present Mexican action limit for child blood lead of 10 micrograms per deciliter (μg/dL) at age 2, whereas in the latest-born group only 26% exceeded the limit. For the 100 children with complete data, the peak average of 10.5 μg/dL at age 2 dropped to 4.9 μg/dL at age 10. The children without complete data had somewhat higher concentrations, but saw a parallel drop. Although significant, none of these drops were nearly as dramatic as the drop in airborne concentrations. The team speculates that poor nutrition—as indicated in other Mexico City studies showing low intake of key nutrients such as iron—may have contributed to higher blood lead in the children. Low intake of essential nutrients including iron, calcium, zinc, potassium, and copper has been shown in numerous studies to be associated with increased absorption of lead. The team found that children living in residential and mixed-use sectors of the metropolitan area had blood lead concentrations about 11% and 7% lower, respectively, than children in the more industrial northeastern area. In addition, socioeconomic differences showed a strong influence. Children in the lowest socioeconomic group had blood lead concentrations 32% higher than the highest group. The team also found that children in families that used lead-glazed ceramics had blood lead concentrations 18.5% higher than children in families that didn't. One-third to one-half of the children's families used lead-glazed pottery, depending on socioeconomic stratum, with the greatest use among poorer families. A strategy of educating parents about the tainted pottery during the course of this study did not help much; families still used the pottery on occasion, and children could use similar pottery at other family members' homes. The problem of lead leaching from certain ceramic glazes has been recognized for more than a century, and in 1993 Mexican officials passed regulations cutting the lead content in pottery.
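As a rough restatement of the comparison drawn above between the decline in airborne lead and the smaller decline in the children's blood lead, the relative drops implied by the reported figures can be computed directly. The short Python snippet below is illustrative arithmetic only, using the numbers quoted in this summary:

```python
# Relative declines implied by the figures quoted above (illustrative arithmetic only).
air_1987, air_2002 = 2.8, 0.07        # mean yearly airborne lead, ug/m3
blood_age2, blood_age10 = 10.5, 4.9   # mean blood lead, ug/dL (children with complete data)

air_drop = 100 * (air_1987 - air_2002) / air_1987
blood_drop = 100 * (blood_age2 - blood_age10) / blood_age2

print(f"airborne lead fell by about {air_drop:.0f}%")   # roughly 98%
print(f"blood lead fell by about {blood_drop:.0f}%")    # roughly 53%
```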
But the businesses that make and sell such pottery are poorly monitored, and many are small family enterprises with no quality control. Today, lead-glazed pottery remains one of the greatest sources of lead exposure for Mexicans. Thus, conclude the researchers, eliminating tainted dishes and pots through better regulation of the ceramics industry is needed to further reduce lead body burden.

Introduction

Aging is characterized by widespread degenerative changes in tissue morphology and function that result in a progressive diminishment of the ability to reproduce and survive. An experimental approach that has led to important insights into causes of these degenerative changes is the identification of factors that influence lifespan of model organisms such as yeast, nematodes, and flies. For example, the lifespan of the free-living soil nematode C. elegans can be significantly extended by cold temperature [1], by mutations that affect mitochondrial function and insulin/insulin-like growth factor (IGF) signaling [2][3][4], and by drugs such as anticonvulsant medicines [5]. Analyses of how these factors influence lifespan have clarified the causes of age-related declines of systems that are necessary for life support. Because lifespan is the focus of many aging studies, age-related declines of systems that do not contribute to life support have been characterized less extensively. The analysis of factors that delay aging of systems that do not contribute to life support can elucidate the causes of aging for these systems and define relationships between the degenerative changes that occur in different systems. The age-related decline of reproductive function, which we refer to as reproductive senescence or reproductive aging, is important for several reasons. Reproductive aging is likely to play a critical role in evolution. The reproductive history of an individual involves (1) the initiation of progeny production as a result of sexual maturity, (2) some level of progeny production during the fertile period, and (3) the cessation of progeny production as a result of reproductive aging or death. These three processes determine the rate of progeny production and the total number of progeny generated by an individual. Because successful reproduction is the ultimate purpose of animal life, it is likely that all three processes are sculpted by natural selection during evolution. Thus, understanding reproductive aging is likely to provide critical insights into the evolution of aging. If reproductive aging were delayed, then an animal would be predicted to generate more progeny. Because aging in general and reproductive aging in particular appear to diminish progeny production, these traits have been a subject of debate for evolutionary theorists. Some of the first notable theories proposed that it was advantageous for the group if older, "worn out" individuals died, as exemplified by the writings of Weismann [6]. These theories have been questioned because they rely on group selection, not individual selection [7]. A major criticism is that "cheaters," mutants that display delayed aging and thereby generate more progeny, will have a selective advantage. The modern theory of the evolution of aging was proposed by Medawar and postulates that extrinsic mortality is the cause of the evolution of aging, because it results in an age-related decline in the force of natural selection [8].
This theory proposes that natural selection cannot favor traits that extend longevity beyond the time when most individuals have died as a result of extrinsic mortality. Williams' elaboration of this basic idea, the antagonistic pleiotropy theory, postulates that traits that promote early reproduction at the cost of diminishing late reproduction will be favored by natural selection [9]. This theory proposes that tradeoffs exist between early and late reproduction, and selection for enhanced early reproduction is the cause of aging. An assumption of the theories of Medawar and Williams is that aging confers a selective disadvantage because it decreases progeny production. Medawar wrote concerning a mutation that causes degeneration, ''If differences in its age of onset are indeed genetically determined, then natural selection must so act as to postpone it, for those in whom the age of onset is relatively late will, on the average, have had a larger number of children than those afflicted by it relatively early, and so will have propagated more widely whatever hereditary factors are responsible for the delay'' [8]. Williams wrote, ''I shall assume initially, therefore, that senescence is an unfavorable character, and that its development is opposed by selection'' [9]. The assumption that continued progeny production confers a selective advantage, and therefore reproductive aging confers a selective disadvantage, is a critical aspect of these theories. An alternative assumption is that progeny production is advantageous initially but at some point continued progeny production confers a selective disadvantage; based on this assumption, reproductive aging confers a selective advantage because it halts progeny production. This assumption would be the basis for different theories for the evolution of aging. Medawar and Williams did not perform experimental tests of these theories, but they described testable predictions of these theories. A key prediction of the Medawar theory is that high levels of extrinsic mortality will correlate with a shorter lifespan, whereas low levels of extrinsic mortality will correlate with a longer lifespan. Reznick and colleagues explored this prediction by analyzing guppies from Trinidad [10]. Populations of guppies that live in downstream areas of rivers experience high rates of extrinsic mortality, since a predatory fish is present. By contrast, populations of guppies that live upstream are isolated from this predator and experience low rates of extrinsic mortality. Guppies from these populations were cultured in the lab to determine lifespan and reproductive characteristics. Guppies that evolved in conditions of high extrinsic mortality displayed an extended life span compared to guppies that evolved in conditions of low extrinsic mortality [10]. This is a surprising finding, since it does not appear to be consistent with the prediction of the theory of Medawar. In addition to an extended lifespan, guppies that evolved in conditions of high extrinsic mortality displayed an earlier onset of reproduction, a higher rate of progeny production, and delayed reproductive aging, resulting in the generation of significantly more progeny [10]. Thus, guppies exposed to high predation evolve delayed reproductive aging, an adaptation that enables greater progeny production. These findings raise the possibility that in a habitat with high predation, natural selection favors guppies that generate a larger number of progeny, which may compensate for progeny lost to predation. 
One mechanism to increase progeny production is to delay reproductive aging, and guppies with delayed reproductive aging may be favored by natural selection in the habitat with high predation. By contrast, in a habitat with low predation, natural selection may favor guppies that generate a smaller number of progeny, since fewer progeny are killed by predation. In this habitat guppies with an earlier onset of reproductive aging may have a selective advantage. A key prediction of the antagonistic pleiotropy theory is that genetic variants with increased late reproduction should not display normal levels of early reproduction but rather should display decreased early reproduction. Williams wrote, ''Successful selection for increased longevity should result in decreased vigor in youth'' [9]. Because this prediction of the theory can be tested experimentally, it has become important to analyze the relationships between early and late progeny production. One approach to testing this prediction has been selecting genetic variants of Drosophila with increased late reproduction. These experiments have not always yielded the same results, but in several cases strains selected for enhanced late reproduction displayed reduced early reproduction [11][12][13]. While the identification of strains with reduced early progeny production and enhanced late progeny production is consistent with the theory, it does not demonstrate that the theory is correct, since these findings may also be consistent with other theories. By contrast, the identification of strains with normal early progeny production and enhanced late progeny production would be inconsistent with the theory of antagonistic pleiotropy. Author Summary In animals, aging is characterized by degenerative changes that progressively diminish the function of tissues and organs. Degenerative changes in life support systems eventually cause death, whereas degenerative changes in reproductive systems eventually cause the cessation of progeny production. Successful reproduction is the ultimate purpose of animal life, and therefore it is important to determine the causes of reproductive aging and the way reproductive aging has been sculpted by natural selection during evolution. Because most aging studies focus on somatic degeneration and lifespan, relatively little is known about the causes of reproductive aging. To identify and characterize factors that influence reproductive aging, we used the nematode worm C. elegans, which is a prominent model for studies of somatic aging. Our results indicate that reproductive aging in worms can be delayed by cold temperature, by restricting nutrient uptake, by diminishing insulin signaling, and by an anticonvulsant medicine that acts on the nervous system. These studies identify genetic pathways and environmental factors that influence reproductive aging. Surprisingly, reproductive aging was not influenced by progeny production early in the reproductive period, indicating that using the germ line to produce progeny does not accelerate degenerative changes. These results suggest that reproductive aging is not caused by use-dependent mechanisms. In addition to its role in evolution, progeny production and reproductive aging have important relationships to somatic aging. One relationship that has been examined extensively is the impact of progeny production on somatic aging [14]. 
It has been proposed that resources allocated for progeny production are not available for somatic maintenance, and progeny production thereby accelerates somatic aging. Alternatively, progeny production may inflict somatic damage, thereby shortening lifespan. Consistent with these theories, dietary restriction can reduce progeny production and extend the lifespan of C. elegans self-fertile hermaphrodites [1], Drosophila [15], and rodents [16][17][18]. Furthermore, some single gene mutations that extend lifespan also reduce progeny production in C. elegans self-fertile hermaphrodites [2,3], Drosophila [19], and rodents [20]. In C. elegans, ablation of the germ cells extends lifespan if the somatic gonad is intact, but not if the somatic gonad is also ablated [21]. In a group of human females, a small number of progeny positively correlates with extended longevity [22]. However, the correlation between reduced progeny number and extended lifespan is not always observed. Among wild-type Drosophila, there is generally a positive correlation between progeny production and longevity [23]. Furthermore, there are single gene mutations that extend Drosophila or self-fertile hermaphrodite C. elegans lifespan but do not reduce progeny production [24][25][26][27]. It is possible that these mutations reduce progeny production in specific culture conditions that have yet to be determined [28,29]. A second relationship is the impact of somatic aging on reproductive function. Reproductive aging may occur primarily because of degenerative change in the germ line. However, reproduction is an integrated function that requires the soma, and reproduction may decline, at least in part, because of declines in somatic function. In rodents, ovary transplantation experiments indicate that age-related changes in estrous cycling arise from both ovarian and somatic impairments [30,31]. A third relationship is the extent to which common mechanisms influence the processes of degenerative change in somatic and reproductive tissues. In principle, the same factors might influence somatic aging and reproductive aging, or separate factors might control these events. Studies of reproductive aging have been conducted in a variety of animals. In humans, females display an age-related decrease in the number of oocytes in the ovary [32]. Human females typically experience menopause, the complete cessation of progeny production, in the fifth decade of life. Female rodents also display an age-related cessation of cycling and progeny production. In rodents, dietary restriction inhibits reproductive cycling, but interestingly when full feeding is resumed in mid-life, the cessation of reproductive cycling and progeny production is delayed [16][17][18]. In fruit flies, several factors have been demonstrated to influence reproductive aging. Reproductive aging can be delayed by dietary restriction [33,34] and some single gene mutations [35,36]. In males, reproductive activity can accelerate reproductive aging [37]. Studies of reproductive aging of C. elegans have focused primarily on self-fertile hermaphrodites. Wild-type hermaphrodites first produce approximately 300 self-sperm and then produce eggs [38]. Self-fertile hermaphrodites efficiently use these self-sperm and deposit about 250 self-fertilized progeny. When the supply of self-sperm is exhausted, hermaph-rodites deposit unfertilized oocytes and then they cease to deposit oocytes [38]. 
Hermaphrodites that are mated to males use male sperm in preference to self-sperm and can produce a larger number of progeny, indicating that the number of self-fertile progeny that can be generated is limited by the number of self-sperm that are produced [39]. Reproductive aging in self-fertile hermaphrodites has been measured using two basic approaches. The first is analyzing age-related changes in the morphology of the germ line. Garigan et al. [40] used Nomarski optics to demonstrate that older, self-fertile hermaphrodites display more widely spaced nuclei in the mitotic germ line and nucleoplasm that is disrupted by cavities and grainy material. Older gonads frequently appeared to be shriveled, containing relatively few nuclei and cellularized nuclei. These changes began to be apparent at the fifth day of adulthood and increased with age. An insulin/IGF-signaling pathway regulates life span of C. elegans and several other animals [41]. Loss-of-function mutations in genes encoding DAF-2, a receptor tyrosine kinase, and AGE-1, a component of the PI-3 kinase, increase life span [2,3]. A major target of this pathway is the forkhead transcription factor DAF-16, and a daf-16(lf) mutation decreases life span and suppresses the life span extension caused by age-1 and daf-2 mutations [3]. Garigan et al. [40] showed that age-related changes in germ cell morphology in self-fertile hermaphrodites were accelerated in daf-16(lf) mutants and delayed in daf-2(lf) mutants. The second approach to measuring reproductive aging is analyzing the schedule of progeny production. The typical summary statistics include the duration of the progeny production period and the number of progeny generated at the end of the reproductive period. Several studies have examined how mutations that affect insulin/IGF signaling influence progeny production of self-fertile hermaphrodites. Some daf-2 mutations cause sterility of self-fertile hermaphrodites, indicating that daf-2 is necessary for reproductive development. The site of action of daf-2 was investigated by genetic mosaic analysis, and daf-2 is required cell nonautonomously to control reproductive development [42]. daf-2 activity influences the time of initiation of progeny production; reducing daf-2 function with RNA interference in self-fertile hermaphrodites delays the onset of progeny production, and time of administration studies suggest that daf-2 may function late in development to affect the onset of reproduction [43]. The effects of insulin/IGF signaling on reproductive aging have been analyzed in self-fertile hermaphrodites. Huang et al. [44] reported that the self-fertile reproductive period of daf-2, age-1, and daf-16 mutants is not significantly different from wild-type hermaphrodites. Larsen et al. [24] showed that for self-fertile daf-2 hermaphrodites with extended life spans, reproduction is neither delayed nor prolonged, except for the observation that daf-2(e1370) mutants intermittently produced a small number of progeny late in life. Gems et al. [26] showed that for self-fertile hermaphrodites, late progeny were produced by multiple strains with class 2 daf-2 alleles, but not by long-lived strains with class 1 daf-2 alleles. Late progeny production in selffertile daf-2 hermaphrodites was also reported by Tissenbaum and Ruvkun [25]. The period of self-fertile reproduction is extended by a mutation of tph-1 that reduces serotonin production and likely acts upstream of daf-16 [45]. 
Late progeny production or an extended reproductive span in a mutant self-fertile hermaphrodite compared to a wild-type self-fertile hermaphrodite demonstrates that the mutation delays sperm depletion, since self-sperm depletion causes reproductive cessation in wild-type self-fertile hermaphrodites. The mutation might also delay the age-related decline of reproductive capacity. However, to demonstrate such a delay, it is important that both the experimental and control hermaphrodites have a sufficient supply of sperm. Factors that can delay reproductive aging of C. elegans hermaphrodites that are not sperm limited have not been reported. Here, we demonstrate that reproductive aging occurs in mated C. elegans hermaphrodites that are not sperm limited, establishing the utility of this model system for studies of reproductive aging. We identified four factors that can delay reproductive aging of mated hermaphrodites raised at 20 °C with abundant food: cold temperature, dietary restriction, a mutation that reduces the activity of the insulin/IGF-signaling pathway, and an anticonvulsant medicine that likely affects neural activity. By manipulating sperm availability, we demonstrated that early progeny production neither accelerated nor delayed the age-related decline of progeny production. These studies identify novel factors that control reproductive aging and have implications for the evolution of reproductive aging.

Measurements of Reproductive Aging in Mated C. elegans Hermaphrodites

To measure reproduction, we typically placed a single fourth larval stage (L4) hermaphrodite on a petri dish with abundant food, transferred the animal to a fresh dish daily, and scored the number of live progeny produced daily. Wild-type hermaphrodites cultured at 20 °C begin to deposit fertilized eggs about 12 h after the L4 stage. This method provides accurate and precise measurements of daily progeny production. An examination of these data revealed that there is a vertex, or peak, of progeny production that separates a period of increasing reproductive function from a period of declining reproductive function (Figure 1). We used the L4 stage, the vertex, and the cessation of progeny production to define periods of reproductive function: the reproductive growth span is the phase with an increasing rate of progeny production and is defined as the time from L4 until the vertex, and the total reproductive span is defined as the time from L4 to the cessation of progeny production. The increasing progeny production that occurs during the reproductive growth span indicates that there is a developmental program that enhances reproductive function and culminates in a peak of progeny production. The declining progeny production that occurs next indicates that the reproductive system undergoes age-related degeneration. These data were analyzed to yield quantitative measurements of early progeny production and late progeny production for each individual. Because the vertex of progeny production typically occurs early in the total reproductive span, this value is a useful measure of early progeny production. To quantitatively measure early reproduction, we estimated the time of the vertex and the level of progeny production at the vertex (see Materials and Methods; Table 1). To quantitatively measure reproductive aging, we used two approaches. First, the total reproductive span was used to determine the duration of the reproductive period.
Second, the number of progeny generated at the end of the reproductive period was determined (Figure 2; Table 2). If a factor delays reproductive aging, then it is expected to extend the total reproductive span and increase the number of progeny generated at the end of the reproductive period. Hermaphrodites produce approximately 300 self-sperm at the start of gametogenesis before switching to oocyte production [38,46]. The number of progeny generated by self-fertile hermaphrodites appears to be limited by the number of self-sperm. First, the number of sperm corresponds closely to the number of self-progeny generated, indicating that nearly every sperm is utilized to produce a zygote [38]. By contrast, hermaphrodites produce oocytes in excess, and unfertilized oocytes begin to be laid as sperm is depleted. Second, in hermaphrodites that are mated to males, the male sperm is used in preference to the self-sperm, and mated hermaphrodites can produce substantially more than 300 progeny [39]. These observations demonstrate that hermaphrodites have reproductive capacity that is not utilized when they are limited to approximately 300 self-sperm but can be utilized when they are provided sufficient male sperm. We reasoned that for live progeny production to be an accurate measure of reproductive capacity at the end of the reproductive period, it is critical that hermaphrodites have abundant sperm continually, so that they do not cease live progeny production as a result of sperm limitation. To develop a method for providing sufficient sperm to hermaphrodites, we cultured hermaphrodites with males for different intervals during the reproductive period. Hermaphrodites cultured continually with males displayed a shortened lifespan and frequently died during the reproductive period (unpublished data), consistent with the findings of Gems and Riddle [47]. These observations suggest that continual exposure to males results in traumatic injury to hermaphrodites, probably as a result of repeated mating, and this trauma can lead to premature death. To reduce traumatic injury, we determined the minimum period of time that a hermaphrodite could be exposed to males and still receive sufficient male sperm for the duration of the lifespan. Hermaphrodites were mated to males for different periods of time, and the sex ratio of progeny at the end of the reproductive period was monitored. Self-fertile hermaphrodites generate almost all hermaphrodite progeny, whereas mated hermaphrodites generate half male and half hermaphrodite progeny. Hermaphrodites mated to males for 24-48 h beginning at the L4 stage frequently produced both male and hermaphrodite progeny until the cessation of reproduction, indicating that these hermaphrodites never exhaust the supply of male sperm. Occasionally, mated hermaphrodites produced male progeny briefly and then produced only hermaphrodite progeny. These hermaphrodites probably received a small supply of male sperm that was used initially, and then they reverted to the use of self-sperm. To ensure that the hermaphrodites that were included in the data analysis had an abundant supply of male sperm, we monitored the sex ratio of the progeny and only analyzed mated hermaphrodites that continued to produce male progeny until reproductive cessation. Hermaphrodites mated on days 1 and 2 might acquire sufficient sperm initially, but the sperm might become inviable over time, leading to a decrease in progeny production.
Figure 1 (legend). Hermaphrodites were mated for days 1 and 2 to three wild-type (WT) males, except data labeled Self. Studies were conducted at 20 °C, except data labeled 15 °C and 25 °C. The mutant alleles were isp-1(qm150), clk-1(qm30), eat-2(ad465), daf-2(e1370), and daf-16(mu86). Wild-type hermaphrodites were exposed to 2 mg/ml ethosuximide (+ETH) from conception until death (I and J). doi:10.1371/journal.pgen.0030025.g001

To address this issue, we compared hermaphrodites mated on day 1 to hermaphrodites mated late in life. If male sperm becomes inviable over time, then hermaphrodites mated day 1 might produce fewer progeny late in life than hermaphrodites that are mated late in life and receive a supply of fresh sperm. By contrast, if male sperm remains viable for the duration of the reproductive period, then hermaphrodites mated day 1 and hermaphrodites mated late are predicted to generate similar numbers of progeny late in life. Figure 2D shows that hermaphrodites mated on days 3, 5, 6, 7, or 8 usually had a similar number of progeny late in life compared to hermaphrodites mated on day 1; specifically, in five comparisons the values were not significantly different, in two comparisons the late mated value was slightly higher, and in two other comparisons the late mated value was slightly lower. These observations indicate that male sperm received by a hermaphrodite on day 1 does not become inviable during the reproductive period. Self-fertile, wild-type hermaphrodites cultured at 20 °C had an average of 263 total progeny (the range was 87-354) (Figure 1A and 1B; Table 1). These values are consistent with efficient utilization of approximately 300 self-sperm [38,46]. The vertex of progeny production was 110 progeny/day and occurred at 2.3 d. The total reproductive span was 6.3 d. Compared to self-fertile hermaphrodites, wild-type hermaphrodites mated on days 1 and 2 had a greater total progeny number of 434 (the range was 139-690), a 65% increase (Figure 1A and 1B; Table 1). The vertex of progeny production was similar, 115 progeny/day at 2.1 d. The total reproductive span was increased to 8.2 d, a 30% increase. These findings indicate that early progeny production is similar in self-fertile and mated hermaphrodites, whereas late progeny production is increased in mated hermaphrodites. These results suggest two important conclusions. First, measuring progeny production underestimates the reproductive capacity of self-fertile hermaphrodites at the end of the reproductive period; hermaphrodites must be mated to avoid sperm depletion and to use progeny production as an accurate measure of reproductive aging. Second, age-related declines of progeny production occur in mated hermaphrodites that have sufficient sperm. Therefore, reproductive aging occurs in mated C. elegans hermaphrodites and is caused by factors other than sperm depletion.

Genetic, Environmental, and Pharmacological Factors That Influence Reproductive Aging

To characterize mechanisms that influence reproductive aging, we analyzed manipulations that affect life span, including mutations, a drug, and temperature. These treatments can be classified by their effects on early reproduction (normal or decreased) and late reproduction (normal, decreased, or increased) (Figure 2E). Three treatments decreased early reproduction and increased late reproduction. One treatment was cold temperature. The life span of poikilotherm animals like C. elegans is increased by cold temperature [1].
The reproductive span of self-fertile hermaphrodites is extended by cold temperature, indicating that this treatment delays sperm depletion [1,44]. Hermaphrodites mated early and cultured at 15 °C took longer to reach the vertex of progeny production and displayed a reduced vertex of 79 progeny/day, demonstrating that culture at 15 °C reduces early progeny production (Figure 1C; Table 1). The total number of progeny was 440 (range of 95-626), similar to hermaphrodites cultured at 20 °C. Hermaphrodites cultured at 15 °C had an increased total reproductive span of 10.6 d, a 29% increase. These hermaphrodites produced 19.3 progeny after day 9 compared to 1.8 progeny after day 9 for hermaphrodites raised at 20 °C, an 11-fold increase (Table 2). The extended total reproductive span and the increased progeny production late in life indicate that cold temperature delays reproductive aging.

Table 1 (notes). Values are means with standard error. These data are graphed in Figure 1. *p = 0.01-0.05; **p < 0.01, compared to wild type (line 1). (a) Wild-type hermaphrodites or mutants were cultured at 20 °C (except lines 3 and 4) on standard media, except wild-type animals exposed to 2 mg/ml ethosuximide (+ETH). Hermaphrodites were mated to males days 1 and 2 (except line 2). (b) Total progeny was directly measured for each animal. The vertex was calculated for each animal using a general linear mixed model that assumes a quadratic curve. (c) The period from L4 until the vertex of progeny production was determined for each animal.

A second treatment was dietary restriction. Dietary restriction extends the life span of many animals, including C. elegans [1]. The eat-2(ad465) loss-of-function mutation reduces pharyngeal pumping rate and food intake and extends adult life span about 47% [48,49]. This eat-2 mutation extends the reproductive span of self-fertile hermaphrodites, indicating that dietary restriction delays sperm depletion [44]. eat-2(ad465) mated hermaphrodites displayed a reduced vertex of 58 progeny/day, demonstrating a decrease in early progeny production (Figure 1E and 1F; Table 1). The total number of progeny was significantly reduced to 245 (the range was 106-416). The total reproductive span increased to 10.8 d for mated eat-2 hermaphrodites, a 32% increase. Progeny production after day 9 was increased 8-fold and 5-fold in early and late mated eat-2 hermaphrodites, respectively (Table 2). These findings indicate that dietary restriction delays reproductive aging. A third treatment was reducing activity of an insulin/IGF-signaling pathway. This pathway regulates life span of C. elegans and several other animals [41]. daf-2(e1370) is a partial loss-of-function mutation that affects the kinase domain of the DAF-2 receptor tyrosine kinase, and this mutation causes a dramatic extension of adult lifespan [3]. Mated daf-2(e1370) hermaphrodites had a reduced vertex of 42 progeny/day, demonstrating a decrease in early progeny production (Figure 1G and 1H; Table 1). Total progeny production was reduced significantly to 248 (the range was 19-493). Nearly every daf-2(e1370) animal mated early in life died because of internal hatching of progeny; this occurs at a lower frequency in mated wild-type hermaphrodites, and it complicates the interpretation of the total reproductive span. For progeny production after day 9, the daf-2(e1370) mutation caused a 14-fold increase for hermaphrodites that were mated early and survived until day 10, but caused no significant effect for hermaphrodites mated late (Table 2).
Overall, these findings indicate that daf-2(e1370) delays reproductive aging. A daf-16 mutation partially suppressed the effects of daf-2 on late reproduction (Figure 1G and 1H; Table 2). Two additional mutations that extend life span to a lesser degree, daf-2(m41) and age-1(hx546), did not significantly delay reproductive aging (Tables 1 and 2). Thus, reducing the activity of the insulin/IGF-signaling pathway can delay reproductive aging, but the effect was only caused by a specific mutation. One class of mutations decreased early reproduction but did not significantly affect reproductive aging. Loss-of-function mutations in genes important for mitochondrial function such as clk-1(qm30) and isp-1(qm150) extend adult life span about 20% and 63%, respectively [4,50]. clk-1 hermaphrodites display an extended self-fertile reproductive span, indicating that there is a delay in sperm depletion [44]. Mated clk-1 and isp-1 hermaphrodites took longer to reach the vertex of progeny production and had a reduced vertex, indicating that early progeny production is decreased compared to wild type (Figure 1D; Table 1). The number of total progeny was reduced significantly for clk-1 to 196 (range of 41-374) and for isp-1 to 95 (range of 4-326), consistent with previous reports that the total number of progeny generated by self-fertile hermaphrodites is reduced [51]. clk-1 and isp-1 hermaphrodites did not have an extended total reproductive span and did not produce significantly more progeny than wild type after day 9 (Tables 1 and 2), indicating that these mutations do not delay reproductive aging. Thus, reduced early progeny production in a long-lived mutant is not sufficient to delay reproductive aging. One treatment resulted in normal early reproduction and decreased late reproduction: culture at the elevated temperature of 25 °C. Early mated hermaphrodites raised at 25 °C had a vertex of 124 progeny/day, similar to animals cultured at 20 °C (Figure 1C; Tables 1 and 2). The total number of progeny generated by hermaphrodites cultured at 25 °C was reduced significantly to 261 (the range was 111-466). The total reproductive span was decreased 32%, and fewer progeny were produced after day 9 in early-mated hermaphrodites, suggesting that culture at 25 °C accelerates reproductive aging. One treatment resulted in normal early reproduction and increased late reproduction: the anticonvulsant ethosuximide. Exposure to 2 mg/ml ethosuximide extends adult life span about 17%, and ethosuximide is likely to function by affecting neuronal activity [5]. Ethosuximide treatment does not significantly affect the self-fertile reproductive span, indicating that drug treatment does not delay sperm depletion [5]. Mated hermaphrodites treated with ethosuximide had a vertex of 109 progeny/day, similar to untreated animals. These hermaphrodites produced a higher total progeny number of 447 (the range was 193-723), although this trend did not achieve statistical significance with this sample size (Figure 1I and 1J; Table 1). Ethosuximide treatment increased the total reproductive span to 9.2 d, a 12% increase. The number of progeny produced after day 9 increased by 7-fold in both early and late mated hermaphrodites (Table 2). These results demonstrate that ethosuximide treatment delays reproductive aging.
Early Progeny Production Does Not Accelerate or Delay Reproductive Aging

To address the relationships between early progeny production and reproductive aging using an alternative approach, we used sperm availability to manipulate hermaphrodite reproduction. Wild-type hermaphrodites mated on day 1 generated 424 progeny during the early and middle reproductive period (days 1-7) and 8 progeny late in life (days 8-12) (Figure 2A and 2D). Hermaphrodites that were mated on progressively later days 5, 6, and 8 displayed significantly decreased progeny production days 1-7, but displayed only minor changes in late progeny production. Hermaphrodites with the spe-8(hc50) mutation produce nonfunctional sperm and thus produce no viable self-progeny. spe-8(hc50) hermaphrodites do mature and ovulate unfertilized oocytes at the normal rate, thus incurring the metabolic costs of reproduction. spe-8(hc50) hermaphrodites mated to wild-type males on day 1 produced 421 progeny, a number similar to mated wild-type hermaphrodites, and they displayed a pattern of reproductive aging similar to wild type (Figure 2B). spe-8 hermaphrodites mated on progressively later days 3, 5, 7, or 10 displayed dramatically reduced progeny production days 1-7 (Figure 2B and 2D). Notably, the age-related decline in progeny production was similar for each mating day, and late progeny production was independent of the mating day.

Table 2 (notes). (a) Hermaphrodites were raised at 20 °C (except lines 2 and 3) on standard media, except for wild-type animals exposed to 2 mg/ml ethosuximide (+ETH). (b) Hermaphrodites that were mated on day 1 for about 48 h and survived until day 10 were scored for progeny production.

Because wild-type and spe-8 hermaphrodites still produce many fertilized or unfertilized oocytes early, we examined fog-2(q71) hermaphrodites that produce no self-sperm and very few unfertilized oocytes, since the oocyte maturation signal from sperm is lacking [52]. fog-2 hermaphrodites ovulate at a much-reduced rate, and thus the metabolic effects of continual oocyte production are much less than in wild type or spe-8 mutants. fog-2(q71) hermaphrodites mated day 1 produced 509 progeny, a number similar to wild-type hermaphrodites. fog-2(q71) hermaphrodites mated on days 3, 5, 7, or 10 displayed dramatic reductions in early progeny production, but did not display increased late progeny production. Early and late mated hermaphrodites displayed similar patterns of age-related declines in progeny production (Figure 2C and 2D). These studies demonstrate that early reproduction does not accelerate or delay reproductive aging and indicate that the timing of reproductive aging is independent of the substantial metabolic demands of reproductive activity. In other words, the reproductive system undergoes an age-related decline in function that appears to be independent of whether the system is used to generate progeny.

Discussion

Age-related changes of C. elegans reproduction have been analyzed primarily in self-fertile hermaphrodites. The first important issue addressed by our studies of mated hermaphrodites is whether C. elegans hermaphrodites undergo reproductive aging. Self-fertile hermaphrodites display a decline of progeny production beginning about day 2.3, and progeny production ceases about day 6.3. The cessation of progeny production occurs because the supply of self-sperm is completely utilized [38]. Sperm depletion might also contribute to the declining progeny production; alternatively, this decline might be caused by other factors.
To address this issue, we compared hermaphrodites that are self-fertile to hermaphrodites that were mated early in life. Mated hermaphrodites displayed a decline in progeny production that begins about day 2.1, indicating that the time of initiation of the decline in progeny production is not influenced by sperm depletion. The decline in progeny production is more gradual in mated hermaphrodites than in self-fertile hermaphrodites, and the cessation of progeny production is delayed until day 8.2 in mated hermaphrodites. These results indicate that sperm depletion in self-fertile hermaphrodites accelerates the decline in progeny production and causes the cessation of progeny production. By contrast, the decline in progeny production displayed by mated hermaphrodites is not caused by sperm depletion. The mated hermaphrodites that we analyzed had sufficient sperm, since they continued to produce cross progeny until the time of reproductive cessation. Furthermore, the male sperm does not appear to become inviable over time, since mated hermaphrodites displayed this decline of progeny production even when they received a fresh supply of sperm late in life. These results demonstrate that the reproductive system in mated hermaphrodites undergoes an age-related decline in function independent of sperm availability. Several prominent theories of aging are based on the concept that using an organ system promotes age-related degeneration. Potential mechanisms include the accumulation of mechanical damage and the accumulation of deleterious products of metabolism, such as reactive oxygen species [53]. For example, contracting the muscular pharynx of C. elegans has been proposed to contribute to age-related declines of pharyngeal function [54]. To investigate how using the reproductive system to produce oocytes affects age-related degeneration of reproductive function, we manipulated progeny production by controlling sperm availability. To obtain complete control over early progeny production, we utilized fog-2 mutants that are self-sterile and behave like females; they produce no fertilized oocytes and few unfertilized oocytes in the absence of male sperm, so they do not incur the metabolic costs of progeny production until they are mated [52]. When male sperm is available, fog-2 hermaphrodites produce about the same number of progeny as mated wild-type hermaphrodites, indicating that germ-line function is normal with the exception of self-sperm production. If progeny production causes the accumulation of damage or deleterious metabolites that contribute to reproductive aging, then fog-2 hermaphrodites that produce no progeny early in life as a result of sperm limitation are predicted to have delayed reproductive aging. By contrast, if progeny production early in life does not cause reproductive aging, then fog-2 hermaphrodites that produce no progeny early in life as a result of sperm limitation are predicted to have a normal time course of reproductive decline. Our results with fog-2 and two other strains clearly demonstrate that early reproduction neither accelerates nor delays reproductive decline. This result is striking, because progeny production is a major metabolic activity for C. elegans hermaphrodites. These results indicate that reproductive aging in C. elegans hermaphrodites is not controlled by use-dependent mechanisms and may be controlled by time-dependent mechanisms.
The relationships between early progeny production and reproductive aging have not been widely studied, but they have been examined in Drosophila males. High levels of early mating accelerate the decline of late reproduction, suggesting that use-dependent mechanisms influence reproductive aging in Drosophila males [37]. There may be sex- or species-specific differences in the relationships between early reproduction and reproductive aging. Caloric restriction extends the lifespan of many animals and delays a wide range of age-related degenerative changes. We determined how caloric restriction affects reproductive aging of mated C. elegans hermaphrodites by analyzing eat-2 mutants that have defective feeding, resulting in reduced caloric intake and an extended lifespan [48]. Self-fertile eat-2 hermaphrodites display an extended reproductive span, indicating that there is delayed sperm depletion [44]. Consistent with this observation, caloric restriction caused by the eat-2 mutation reduced early progeny production and the total number of progeny generated by mated hermaphrodites. Furthermore, mated eat-2 mutant hermaphrodites displayed an extended reproductive period and increased progeny production late in life, indicating that caloric restriction delays reproductive aging. These effects of caloric restriction on C. elegans are similar to the effects of caloric restriction on reproduction of females of other species. The only intervention that has been well documented to extend the reproductive period of female rodents is caloric restriction [16-18]. Caloric restriction also reduces the total number of progeny produced by female rodents. Similarly, dietary restriction of female Drosophila reduces the number of progeny generated early and the total number of progeny, and in some cases late reproduction is increased [33,34]. While caloric restriction reduces the number of progeny produced early in life by C. elegans hermaphrodites and other animals, our results with sperm manipulation indicate that this does not cause the delayed reproductive aging of C. elegans. We conclude that the reduced progeny production early in life and the delayed reproductive aging are likely to be independent effects of caloric restriction. Caloric restriction delays somatic and reproductive aging in worms, rodents, and flies, indicating that caloric restriction may influence a conserved mechanism that affects both somatic and reproductive tissues. Ambient temperature influences the lifespan of poikilotherm animals such as C. elegans [1]. Compared to the typical culture temperature of 20 °C, higher temperatures such as 25 °C shorten lifespan and lower temperatures such as 15 °C extend lifespan. Lower temperatures also extend the reproductive span of self-fertile hermaphrodites, indicating that sperm depletion is delayed [1,44]. Our analysis of mated hermaphrodites showed that hermaphrodites cultured at 15 °C displayed an extended period of reproduction and higher levels of reproduction late in life compared to hermaphrodites cultured at 20 °C. These results indicate that cold temperature delays reproductive aging. Cold temperature did not affect the total number of progeny significantly. Prominent theories that address the mechanisms of delayed aging caused by reduced temperature include (1) a global effect on the rate of chemical reactions and (2) the induction of a stress response.
The result that temperature affects both reproductive and somatic aging is consistent with the model that these effects are mediated by similar mechanisms. Further studies are necessary to define these mechanisms. An evolutionarily conserved insulin/IGF-signaling pathway has a major influence on the adult lifespan of C. elegans, and this pathway also influences the lifespan of other animals [41]. Mutations that affect this pathway have a variety of effects on reproduction of self-fertile hermaphrodites; specific alleles of daf-2 can result in the production of self-progeny late in life [24][25][26] and delay age-related morphological changes in the gonad [40], indicating that reducing the activity of this pathway can delay reproductive aging. By contrast, several mutations that cause an extended lifespan do not extend the self-fertile reproductive span [24,44]. Our results show that reducing the activity of this pathway can delay reproductive aging in mated hermaphrodites. The effect was caused by the daf-2(e1370) mutation, but not by other mutations that cause lesser extensions of lifespan. The effects of this daf-2 mutation were partially suppressed by a loss-of-function mutation of daf-16. A pathway involving daf-2 and daf-16 affects dauer formation and adult lifespan [3], and our results suggest that a similar pathway affects reproductive aging. Reducing the activity of this pathway also reduced the level of early progeny production and the total number of progeny. The reduced level of early progeny production and the delayed reproductive aging are probably independent effects, since reducing early progeny production by sperm limitation does not delay reproductive aging. The anticonvulsant medicine ethosuximide extends C. elegans adult lifespan and delays age-related declines of somatic functions [5]. Ethosuximide is likely to influence somatic aging by modulating neural activity. Ethosuximide treatment does not extend the reproductive period of selffertile hermaphrodites, indicating that sperm depletion is not delayed [5]. In mated hermaphrodites, ethosuximide did extend the period of reproduction and increased late progeny production, indicating that this drug delays reproductive aging. Drugs that delay reproductive aging of C. elegans or other animals have not been previously described. These results indicate that neural activity influences the aging of non-neuronal reproductive cells, suggesting that neural activity acts non-cell autonomously to control aging. Interestingly, ethosuximide treatment did not decrease early progeny production and increased the total number of progeny modestly. Other factors that increase total progeny production of mated C. elegans hermaphrodites have not been reported, but a similar phenomenon has been observed in other animals. Mutations in the Drosophila genes Indy and EcR that increase life span can increase the total number of progeny and number of progeny produced late in life [35,36]. Mating in the ant Cardiocondyla obscurior increases both early and late reproduction [55]. These studies indicate that C. elegans and several other animals have the capacity for additional late reproduction, which is not normally utilized. The relationships between factors that influence reproductive and somatic aging have not been well characterized. Here we demonstrate that four factors that delay somatic aging also delay reproductive aging. These findings suggest that there is a substantial overlap between factors that control these processes. 
However, several mutations that extend C. elegans lifespan did not delay reproductive aging. One possible interpretation is that some processes affect somatic aging but not reproductive aging. Another possible interpretation is that these processes affect both somatic and reproductive aging, but the mutations we analyzed reveal the effect on somatic aging but not the effect on reproductive aging. The results of these studies have implications for understanding the evolution of reproductive aging. First, our results address the relationships between early progeny production and reproductive aging and are relevant to predictions of the antagonistic pleiotropy theory. The antagonistic pleiotropy theory predicts that increased progeny production late in the reproductive period should be accompanied by decreased progeny production early in the reproductive period [9]. We identified factors that increase late reproduction and decrease early reproduction, consistent with this prediction of the theory. In addition, a factor was identified that increases late reproduction but does not decrease early reproduction, and this is not consistent with this prediction of the theory. Overall, these results suggest that there is no consistent relationship between early progeny production and late progeny production. Therefore, selection during evolution for high levels of early reproduction may not be a cause of reproductive aging in C. elegans. Our results indicate that there is plasticity in the timing of reproductive aging, and we speculate that selection for an optimal number of progeny may determine the timing of reproductive aging. First, C. elegans hermaphrodites that are not sperm limited undergo a complete cessation of progeny production as a result of reproductive aging before the end of the lifespan. A similar pattern is displayed by other female animals, such as humans and guppies [56]. These findings suggest that somatic aging may not limit reproduction, but rather the reproductive system is engineered to fail before the soma fails. In addition, our results indicate that C. elegans hermaphrodites have the capacity to generate a larger number of total progeny and a larger number of progeny later in life, and these capacities are not normally utilized. Studies of other animals reveal similar extra capacity [35,36,55]. Together, these findings indicate that C. elegans is not engineered to generate the maximum possible number of progeny. We speculate that there is an optimal number of F1 progeny, and reproductive aging contributes to the ability of the animal to generate the optimal progeny number (Figure 3). If a species encounters selective pressure for an increase in progeny number, then delayed reproductive aging may evolve. If a species encounters selective pressure for a decrease in progeny number, then accelerated reproductive aging may evolve. According to this theory, environmental factors that establish the optimal progeny number will be critical for the evolution of reproductive aging. Limiting progeny production to an optimal number might act at two levels to maximize reproductive success. At the level of the individual, generating the optimal progeny number may limit competition for resources among the progeny and maximize the probability that the optimal number of F1 progeny mature to become reproductive adults. 
At the level of the population, generating the optimal progeny number may help to achieve the maximum sustainable number of animals in the population and avoid oscillations in the number of animals in the population that might increase vulnerability to extinction. Further studies are necessary to determine how reproductive aging contributes to the ability of animals to generate the optimal number of progeny and influences population dynamics.

Materials and Methods

General methods and strains. C. elegans were cultured on 6-cm petri dishes containing NGM agar and a lawn of Escherichia coli strain OP50 at 20 °C unless stated otherwise [57].

Analysis of fertility and reproductive aging. Hermaphrodites were synchronized by selecting animals at the fourth larval stage based on the appearance of the vulva as a dark half circle under a dissecting microscope. L4 hermaphrodites were placed on individual petri dishes (time zero) and transferred to fresh dishes every day until death or until at least four days passed without progeny production. Progeny were counted using a dissecting microscope about two days after transfer. For mating experiments, three young, wild-type males were added to the dish and removed after two days for days 1-5, or after one day for days 6-10 (unless stated otherwise). For matings on days 1-8, hermaphrodites that did not mate to males were recognized by a lack of male progeny and excluded from the data. Sterile hermaphrodites were also excluded from the data. Fertile animals that died during the experiment were included in the data until the day of death for Figure 1 and Table 1, and n values are the number of animals at the start of the experiments. For experiments in Table 2, we mated L4 hermaphrodites to multiple males on days 1 and 2, either individually or in groups of two to six hermaphrodites. Hermaphrodites were transferred to fresh petri dishes every second day until day 10 and then placed on individual petri dishes for progeny scoring. For the matings on day 10, hermaphrodites were transferred individually or in groups of two to six every second day until day 10; on day 10, hermaphrodites were placed on individual plates with three males for one day and then transferred as necessary for progeny scoring. For both protocols, hermaphrodites that died before day 9 were excluded.

Experiments using pharmacological compounds. Ethosuximide was obtained from Sigma Chemical (http://www.sigmaaldrich.com) and added to molten nematode growth medium as described [5]. Parent hermaphrodites were exposed to the drug starting from the L4 stage, and self-progeny were maintained on drug-containing dishes for the duration of the experiments.

Determination of vertex and spans and statistical analysis. The vertex of the longitudinally measured progeny counts was estimated with a general linear mixed model [65] that assumes a quadratic growth curve for progeny production over time; the model was implemented in PROC MIXED/SAS [66]. The equality of the time of the vertex (reproductive growth span) and the progeny level at the vertex between wild type and any other group was tested using the delta method based on the general linear mixed model (Table 1). In order to avoid bias from the floor effect on the estimation of the vertex, we did not use the small progeny observations at the end of the reproductive span in the general linear mixed model.
The total progeny and the total reproductive span (Table 1), progeny after day 9 (Table 2), and progeny on days 1-7 and days 8-12 (Figure 2) were computed for each worm and compared between wild type and any other group by Wilcoxon's rank sum test.
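For readers who want to apply this style of analysis to their own brood-count data, the sketch below shows the two core computations in simplified form: a per-animal quadratic fit to locate the vertex, and a Wilcoxon rank-sum comparison of late progeny counts between groups. It is only an illustrative approximation, not the authors' PROC MIXED/SAS mixed-model implementation, and the daily counts and group values are hypothetical.

```python
import numpy as np
from scipy.stats import ranksums

def vertex_from_daily_counts(days, progeny):
    """Fit progeny = a*t^2 + b*t + c and return (vertex day, progeny/day at vertex).

    Simple per-animal least squares; the paper instead fits a general linear
    mixed model across animals and excludes the low counts at the end of the
    reproductive span to avoid floor-effect bias.
    """
    a, b, c = np.polyfit(days, progeny, deg=2)
    t_vertex = -b / (2.0 * a)                 # day at which production peaks
    level = np.polyval([a, b, c], t_vertex)   # progeny/day at the peak
    return t_vertex, level

# Hypothetical daily brood counts for one mated hermaphrodite (days post-L4).
days = np.array([1, 2, 3, 4, 5, 6, 7])
progeny = np.array([60, 110, 115, 90, 45, 15, 4])
print(vertex_from_daily_counts(days, progeny))

# Compare late progeny (e.g., after day 9) between two hypothetical groups.
wt_late = [0, 2, 1, 3, 0, 2]
eat2_late = [9, 14, 7, 20, 11, 16]
stat, p = ranksums(eat2_late, wt_late)
print(f"Wilcoxon rank-sum p = {p:.3g}")
```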
Integrating SANS and fluid-invasion methods to characterize pore structure of typical American shale oil reservoirs

An integration of small-angle neutron scattering (SANS), low-pressure N2 physisorption (LPNP), and mercury injection capillary pressure (MICP) methods was employed to study the pore structure of four oil shale samples from the leading Niobrara, Wolfcamp, Bakken, and Utica Formations in the USA. Porosity values obtained from SANS are higher than those from the two fluid-invasion methods, due to the ability of neutrons to probe pore spaces inaccessible to N2 and mercury. However, the SANS and LPNP methods exhibit a similar pore-size distribution, and both methods (which measure total pore volume) yield porosities and pore-size distributions that differ from those obtained with the MICP method (which quantifies pore throats). Multi-scale (five pore-diameter intervals) porosity inaccessible to N2 was determined using SANS and LPNP data. Overall, a large value of inaccessible porosity occurs at pore diameters <10 nm, which we attribute to low connectivity of organic matter-hosted and clay-associated pores in these shales. While each method probes a unique aspect of the complex pore structure of shale, the discrepancy between pore structure results from different methods is explained with respect to their differences in measurable ranges of pore diameter, pore space, pore type, sample size and associated pore connectivity, as well as theoretical basis and interpretation.

SANS experiments. SANS tests were conducted at the National Institute of Standards and Technology Center for Neutron Research (NIST-NCNR) using the NG7 30 m SANS instrument [19,40]. The scattering vector, or momentum transfer, of a scattered neutron is defined by Q = 4πλ⁻¹ sin(θ/2), where θ is the scattering angle and λ is the wavelength of the monochromatic neutron beam. To obtain a wide scattering-vector range of 0.001-0.28 Å⁻¹, sample-to-detector distances of 1 m, 4 m, and 13 m were used with a neutron wavelength of 6.0 Å, together with a 13 m lens geometry at a wavelength of 8.09 Å. In order to reduce multiple scattering, specimens were mounted on quartz glass slides and ground to 150 µm thickness, such that a high neutron transmission and less than 10% multiple scattering are achieved [19]. Raw 2D data were corrected for detector pixel efficiency, background and empty-cell scattering, as well as sample neutron transmission and volume. Corrected scattering intensities at each detector geometry are normalized to the intensity of the open neutron beam and circularly averaged to produce 1D scattering curves, which can be combined to yield the full scattering profile [41].

LPNP experiments. Nitrogen physisorption data were collected by measuring the adsorption branch at relative pressures P/P₀ ranging from 0.010 to 0.995 and the desorption branch from 0.995 to 0.010. The total pore volume and pore size distribution were determined using the BJH method [42].

MICP technique. The MICP technique can effectively determine connected porosity and pore-throat size distribution, using a mercury intrusion porosimeter (AutoPore IV 9510; Micromeritics Corporation) [5]. The cubic sample with a linear dimension of 10 mm was dried at 60 °C for at least 48 hours to remove moisture, and cooled to room temperature (~23 °C) in a desiccator with relative humidity less than 10%. Then, low- (5 to 30 psi; 0.034 to 0.21 MPa) and high-pressure (30 to 60,000 psi; 0.21 to 413 MPa) analyses were initiated by progressively increasing the intrusion pressure while monitoring the volume change of mercury at a detection limit of <0.1 μL.
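As a rough numerical illustration of how intrusion pressures map onto pore-throat sizes via the Washburn relation discussed in the next paragraph, the sketch below uses commonly quoted bulk-mercury values for surface tension and contact angle. It does not include the nanopore confinement correction applied in this study, so the numbers differ somewhat from the corrected 50 µm-2.8 nm range quoted here.

```python
import numpy as np

# Washburn relation: P = -4*gamma*cos(theta)/D  =>  D = -4*gamma*cos(theta)/P
GAMMA = 0.485            # N/m, mercury surface tension (assumed bulk value)
THETA = np.radians(130)  # mercury contact angle (assumed bulk value)

def throat_diameter_nm(pressure_mpa):
    """Pore-throat diameter (nm) intruded by mercury at a given pressure (MPa)."""
    p_pa = pressure_mpa * 1e6
    d_m = -4.0 * GAMMA * np.cos(THETA) / p_pa
    return d_m * 1e9

for p in (0.21, 10.0, 413.0):   # MPa, spanning the experimental range
    print(f"{p:7.2f} MPa -> {throat_diameter_nm(p):10.1f} nm")
# With these bulk values, 0.21 MPa corresponds to roughly 6 µm throats and
# 413 MPa to roughly 3 nm throats; the study quotes 50 µm-2.8 nm over its
# full pressure range after the confinement correction.
```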
Pore-throat size distributions from MICP tests were obtained using the Washburn equation [43], with the confinement correction of the contact angle and surface tension of mercury in shale nanopores [27]. The corrected pore-throat diameters cover a measurement range of 50 μm to 2.8 nm for the experimental conditions (e.g., using a penetrometer with a filling pressure of 5 psi) suitable for shale samples with porosities commonly less than 5%. Pore structure parameters (such as pore-throat size distribution, median pore-throat size, pore volume, pore area, and porosity) can be obtained for multiple connected pore networks across the nm-μm spectrum [8].

SANS

Scattering length density (SLD) of shale samples. The SLD value is a measure of the scattering strength of a material component, which depends on the scattering strength of the constituent scatterers (i.e., the nuclear scattering length for neutrons) as well as their average volume density [44]. Porous rock samples can generally be treated as a two-phase system (solid matrix and pore space), where the number of scatterers and the difference in SLD control the scattering intensity I(Q) [19,35]. The SLD of each mineral component in a porous medium can be obtained from the following quantities: N_A, Avogadro's number (6.022 × 10²³ mol⁻¹); d, the grain density (g/cm³); M, the molecular weight of the mineral component (g/mol); p_j, the fraction of phase j within the material; s_i, the abundance of nucleus i in phase j; and b_i, the coherent scattering amplitude of nucleus i [35]. It is generally accepted to use an average SLD calculation for the mineral matrix, including organic matter [45]. The average matrix SLD of a rock sample is calculated by summing the contributions of its n components k (including organic matter), where SLD(k) is the SLD of mineral component k [29].

Table 2. Mineral composition in weight and volume percent of the studied samples.

Analysis of the SANS data. As is commonly observed in complex heterogeneous systems like shale, log-log scattering profiles (i.e., I(Q) vs. Q) of the four studied samples exhibit a linear trend (i.e., power-law scattering) in the low- and intermediate-Q range (<0.1 Å⁻¹), and a gradual flattening of the curves towards a constant background scattering value at high Q (Fig. 1A). This flat background can originate from the incoherent scattering of hydrogen atoms in the organic matter and adsorbed water of shale, as well as the coherent scattering from pores <25 nm in the rock matrix [29,34,46]. The background values are obtained from the slope of the plot of Q⁴I(Q) vs. Q⁴, where high-Q values dominate [47]. The Niobrara marl exhibits background scattering that is substantially larger than that observed in the other samples. The high-Q region of the scattering profiles shows a substantial change after performing background subtraction (Fig. 1B). In the meantime, the reliable data with a wide power-law distribution extend up to Q ≈ 0.25 Å⁻¹, with the slope of the linear region ranging from 3.1 to 3.5 for the four samples, corresponding to surface fractal dimensions of D_s = 2.5 to D_s = 2.9 [29]. As the surface fractal dimension ranges from 2 (perfectly smooth surfaces) to 3 (extremely rough surfaces) [34], the Wolfcamp, Bakken and Utica shales have a slightly rough pore-matrix interface because their D_s values are close to 3 (2.8-2.9), while the Niobrara marl has a smaller surface fractal dimension (2.5), indicating a slightly smooth interface [46]. In other words, when the surface fractal dimension approaches 3, the pore-matrix interface is folded and almost completely fills the pore space. In fact, the high-porosity Niobrara marl is dominated by mineral-associated pores that are angular and sharp edged, which reflects a slightly smooth pore-matrix interface [48].
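The matrix-pore SLD contrast needed for the fits in the next subsection is assembled from single-component values of the kind described above. The following is a minimal single-mineral sketch (quartz, with tabulated coherent scattering lengths), not the paper's multi-component average over the compositions in Table 2.

```python
N_A = 6.022e23  # 1/mol

# Coherent neutron scattering lengths (fm) from standard data tables.
B = {"Si": 4.1491, "O": 5.803}

def sld_mineral(formula, density_g_cm3, molar_mass_g_mol):
    """Neutron SLD of a single mineral phase, returned in units of 1e-6 A^-2.

    SLD = (N_A * density / M) * sum(stoichiometry_i * b_i); for quartz this
    comes out near the commonly quoted ~4.2e-6 A^-2.
    """
    b_sum_fm = sum(n * B[el] for el, n in formula.items())
    b_sum_cm = b_sum_fm * 1e-13                                   # 1 fm = 1e-13 cm
    sld_cm2 = N_A * density_g_cm3 / molar_mass_g_mol * b_sum_cm   # cm^-2
    return sld_cm2 * 1e-16 * 1e6                                  # -> 1e-6 A^-2

print(sld_mineral({"Si": 1, "O": 2}, 2.65, 60.08))  # approximately 4.18
```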
Porosity and pore-size distribution from SANS tests. Neutron scattering profiles obtained from a polydisperse pore system are characterized by a power-law distribution (i.e., a linear slope) [49]. The polydisperse spherical pore (PDSP) model has been widely applied to analyze total porosity and pore-size distribution in sedimentary rocks from SANS data [29,50]. In the PDSP model [50], the scattering intensity is expressed in terms of the average pore volume V, the SLDs ρ₁* and ρ₂* of the rock matrix (including organic matter) and the pore space, respectively, the total porosity φ of the shale sample, the maximum and minimum pore radii R_max and R_min, the probability density f(r) of the power-law pore size distribution, and the form factor P(Q) of a sphere [29]. For all four samples, the PDSP model fits the linear region of the background-subtracted scattering profile (10⁻³ < Q < 0.25 Å⁻¹), corresponding to pore radii ranging from 1 to 300 nm. The PDSP-model porosity for the four samples shows an obvious difference (Table 3). With a value of 2.91%, the Niobrara marl displays the highest porosity, compared to 1.99% for the Wolfcamp, 1.53% for the Bakken and 1.48% for the Utica shale samples. The model-independent Porod invariant method is also widely used to determine the total porosity of shale samples [45]. The porosity is estimated through the Porod invariant Q_inv of a two-phase system [35,51], obtained by integrating Q²I(Q) over Q. The evaluation of Q_inv is performed by integration over three Q domains: (a) the unmeasured domain between 0 and Q_min (10⁻³ Å⁻¹); (b) the experimentally accessible domain Q_min < Q < Q_max = 0.5 Å⁻¹; and (c) the unmeasured domain Q > 0.5 Å⁻¹. For direct comparison, we also present results based on the Porod invariant using the same I(Q) data as the PDSP model. Total porosities obtained from the PDSP model and the Porod invariant show a slight difference, but a similar trend for the four shale samples (Table 3). The Porod invariant porosity is systematically higher than that obtained from the PDSP model, which we attribute to the inclusion of extrapolated low- and high-Q values not considered in the PDSP analysis. In the same Q range, the PDSP porosity is slightly lower, but agrees well with the Porod invariant porosity. Neutron scattering obtained from a polydisperse pore system is characterized by a power-law distribution [50,52]. Therefore, it is reasonable to fit the PDSP model to the linear portion of the scattering profile, where Q ranges from 10⁻³ to 0.25 Å⁻¹ in this study. Pore size distributions f(r) are obtained from applying the PDSP model using the PRINSAS software [49] and are illustrated in Fig. 2A. The pore size distributions (ranging from 2 to 600 nm) follow a similar decreasing trend with an increase of pore size. In order to provide more direct information on the pore volume with respect to pore diameter, we convert the probability density of pore distributions f(r) into pore volume distributions (Fig. 2B). All samples display a bimodal pattern of pore size distributions, with peaks at pore diameters of ~2 nm and 460-600 nm. The Niobrara marl sample shows a high pore volume at pore diameters of 2-4 nm and 30-600 nm. In contrast, the Bakken shale sample shows a relatively low pore volume over the entire range of SANS-measurable pore diameters (2-600 nm), except for a slightly high value for pores >380 nm. The cumulative pore volume distributions are calculated by summing individual pore volumes and are shown in Fig. 2C. The Niobrara marl exhibits the highest pore volume (4.35 × 10⁻² cm³/g) and the Bakken shale sample the least (1.45 × 10⁻² cm³/g), with the average pore volume of the four samples being 2.34 × 10⁻² cm³/g (Table 3). For the Niobrara marl sample, the cumulative pore volume increases very rapidly at pore diameters smaller than 2.4 nm and larger than 200 nm, where the slope of the curve is much steeper than in other pore regions. The cumulative pore volume of the Wolfcamp shale is higher than that of the Bakken and Utica shales at pore diameters larger than 8 nm and 13 nm, respectively. In addition, the cumulative pore volume distributions of the Wolfcamp, Bakken and Utica shale samples are very close at pore diameters smaller than 20 nm (Fig. 2C), while the volume of the Wolfcamp shale increases rapidly at pore diameters larger than 80 nm.
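For reference, the two standard ingredients invoked above, the form factor of a sphere and the Porod invariant of a two-phase medium, can be written schematically as follows. This is a generic sketch rather than the PRINSAS implementation used here; it also glosses over absolute-intensity units, which must be handled consistently in a real analysis.

```python
import numpy as np
from scipy.integrate import trapezoid

def sphere_form_factor(q, r):
    """Normalized form factor of a sphere of radius r (standard SANS result)."""
    x = q * r
    return (3.0 * (np.sin(x) - x * np.cos(x)) / x**3) ** 2

def porod_invariant_porosity(q, intensity, delta_rho):
    """Porosity of a two-phase medium from the Porod invariant (schematic only).

    Q_inv = integral of Q^2 I(Q) dQ = 2 pi^2 (delta_rho)^2 phi (1 - phi);
    returns the smaller root of the quadratic, valid for phi(1-phi) <= 0.25.
    Input arrays and delta_rho must be in mutually consistent absolute units.
    """
    q_inv = trapezoid(q**2 * intensity, q)
    rhs = q_inv / (2.0 * np.pi**2 * delta_rho**2)
    return 0.5 * (1.0 - np.sqrt(1.0 - 4.0 * rhs))
```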
LPNP

Isotherms of N₂ adsorption and desorption. Nitrogen adsorption and desorption isotherms for the four shale samples are shown in Fig. 3. The adsorbed volume ranges from 3.98 to 13.3 cm³/g; in particular, the Niobrara marl shows a much higher value than the other samples. According to the IUPAC classification [53], the N₂ adsorption isotherms of the four samples exhibit type IV adsorption, but without plateaus in the high relative pressure region. The adsorbed volume rises slowly in the low relative pressure region until the relative pressure is close to 1.0, which indicates the presence of a certain number of both mesopores (2-50 nm) and macropores (>50 nm) in these shales. In addition, although the adsorbed volume is low, all isotherms show some adsorption in the low relative pressure (P/P₀ < 0.01) interval, relating to the presence of micropores (<2 nm). A 'forced closure' is present in the desorption branch at P/P₀ of about 0.45, which is related to the so-called 'tensile strength effect', attributed to an instability of the hemispherical meniscus in pores during the desorption stage [54]. All isotherms show a type H3 hysteresis loop, which is related to the presence of slit-shaped pores [26,53,55,56]. However, this interpretation may be subject to errors based on the SEM and SANS analyses [2,26]. Pore-size distribution by LPNP method. Figure 4 shows the results for pore-size distribution (dV/dD vs. D and dV/dlog(D) vs. D) and cumulative pore volume obtained with the BJH model for the adsorption branch. The plot of dV/dD vs. D shows a broad pore size distribution (1-300 nm) with the peak value around 1.3-1.5 nm. The concentrations display a decreasing trend with an increase of pore size in all samples (Fig. 4A). The Niobrara marl shows another broad peak at pore sizes ranging from 20 to 60 nm, which is obviously different from the other samples. The plot of dV/dlog(D) vs. D, which is proportional to the real pore volumes, reveals that pore sizes larger than 10 nm contribute significantly to the total pore volume (Fig. 4B). The contributions of pore sizes of 1-2 nm and 10-300 nm in the Niobrara marl sample show the same trend as in the plot of dV/dD vs. D, where the values are much higher than those of the other three shale samples. The pore-size distributions of the Bakken and Utica shale samples show a similar trend in both plots (Fig. 4A and 4B).
The cumulative pore volumes were determined from the adsorption isotherms using the BJH model (Table 3, Fig. 4C). The Niobrara marl exhibits the highest pore volume (1.41 × 10⁻² cm³/g), while the Bakken sample shows the least (0.43 × 10⁻² cm³/g), with the average pore volume of the four samples being 0.71 × 10⁻² cm³/g. Other than the Niobrara marl sample, these values agree well with the Barnett, Haynesville, and Eagle Ford shale samples analyzed by Clarkson et al. (2013) [3]. This work also shows that all samples have a large amount of pore volume at pore sizes larger than 10 nm, especially the Niobrara marl sample.

Table 3. Porosity and cumulative pore volume of the studied samples obtained from different methods. Porod 1 porosity is calculated from the extrapolated range of Q on both ends of the scattering profiles, while Porod 2 porosity is from the Q range of 10⁻³ Å⁻¹ to 0.25 Å⁻¹. MICP 1 contains the cumulative pore volume for pore-throats ranging from 2.8 nm to 50 µm (the full range measurable for shale samples by MICP), as compared to MICP 2 from 2.8 nm to 600 nm (the measurable range of SANS analyses, for comparison).

MICP

Mercury intrusion and extrusion curves. The plots of cumulative mercury intrusion and extrusion volume vs. the corresponding pressure for the four shale samples are shown in Fig. 5. With an increase of intrusion pressure from 0.21 to 413 MPa (corresponding to pore-throats ranging from 50 µm to 2.8 nm), the intrusion volume of mercury gradually increases in all samples, indicating the presence of both mesopores and macropores. Specifically, the cumulative intrusion volumes at the highest pressure point show an obvious difference, in the order Niobrara marl (17.8 µL/g) > Bakken shale (6.6 µL/g) > Utica shale (3.1 µL/g) > Wolfcamp shale (2.0 µL/g) (Table 3). Similar to the LPNP tests, hysteresis also occurs in the mercury intrusion and extrusion cycles for all samples, indicating that about 20-90% of the mercury remains trapped in the samples after the pressure returns to its initial value (Fig. 5). In general, during extrusion from narrow pore necks, snapping-off of the liquid meniscus can lead to the formation of isolated droplets in ink-bottle-shaped pores [57], which partly contributes to the entrapment of mercury. Porosity and pore-throat distribution by MICP method. The total connected porosity of the four shale samples shows an obvious difference, ranging from 0.51% to 4.36%, with an average value of 1.82% (Table 3). The MICP porosity is largest in the Niobrara marl sample and smallest in the Wolfcamp shale sample. The pore-throat size distribution and cumulative pore volume results are shown in Fig. 6 for the four shale samples. The pore-throat distributions cover a much broader range (2.8 nm-50 µm) than the LPNP and SANS results. All samples display a unimodal pattern of pore size distributions with a broad peak between 3 and 10 nm in pore-throat diameter. There is a remarkable peak at 10 nm in pore-throat diameter for the Niobrara marl, which is much higher than in the other samples. Noticeably, the volumes of pores with pore-throat diameters less than 50 nm dominate in all shale samples, with proportions in the order Niobrara marl (92.6%) > Wolfcamp shale (86.9%) > Bakken shale (73.7%) > Utica shale (67.5%). The results are consistent with previous studies on Barnett, Horn River, Longmaxi, Marcellus, Woodford, and Utica shales using both MICP and LPNP analyses [14,58-62]. In addition, the plot of pore-throat diameter vs. cumulative pore volume shows that the total pore volumes of the four samples exhibit an obvious difference, which is mainly ascribed to the volume of pores with pore-throats less than 20 nm (Fig. 6C).
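Because the three methods cover different pore-diameter windows, the like-for-like comparisons made in the next paragraphs (e.g., restricting MICP and SANS results to a shared 2.8-600 nm window) amount to truncating each cumulative pore-volume curve to a common diameter range. A hedged sketch with hypothetical curves, purely to show the bookkeeping:

```python
import numpy as np

def volume_in_window(diam_nm, cum_vol_cc_g, d_lo, d_hi):
    """Pore volume (cm^3/g) a method attributes to diameters in [d_lo, d_hi].

    diam_nm must be sorted ascending with its matching cumulative-volume
    curve; values are interpolated at the window edges.
    """
    v_lo = np.interp(d_lo, diam_nm, cum_vol_cc_g)
    v_hi = np.interp(d_hi, diam_nm, cum_vol_cc_g)
    return v_hi - v_lo

# Hypothetical cumulative curves (not the measured data).
d = np.array([2.0, 5.0, 10.0, 50.0, 100.0, 600.0, 5000.0])
sans = np.array([0.000, 0.006, 0.010, 0.016, 0.019, 0.023, 0.023])
micp = np.array([0.000, 0.002, 0.004, 0.007, 0.008, 0.010, 0.014])

print(volume_in_window(d, sans, 2.8, 600.0))
print(volume_in_window(d, micp, 2.8, 600.0))
```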
In addition, from the plot of pore-throat diameter vs. cumulative pore volume, the total pore volumes of the four samples exhibit an obvious difference, which is mainly ascribed to the volume of pores with pore-throats less than 20 nm (Fig. 6C). The porosities and cumulative pore volumes obtained with the three methods are compared in Table 3. The MICP method covers a wider pore-throat size range, from 2.8 nm to 50 µm, while the SANS and LPNP methods only detect pore features of 2-600 nm and 1-300 nm, respectively. To provide a more direct comparison, we also calculate the MICP porosity and cumulative pore volume in the pore-throat diameter range of 2.8-600 nm, and the cumulative pore volume in the pore diameter range of 2-600 nm from the LPNP method, which are much lower than those from the SANS method (Table 3). This result is likely related to the ability of SANS to probe the total pore volume, including both open (accessible) and closed (inaccessible) pores, while fluid-invasion methods can only examine pores accessible to the corresponding fluid molecules under the conditions of the experiment 29,35 . From this consideration, we deduce that inaccessible pores contribute significantly to the total pore volume in the four shale samples. Inaccessible pores can occur as both intraparticle (within minerals and organic matter) and interparticle pores, and both could be widely developed in shale, coal, carbonate and siltstone 2,3,20,30,37 . We will specifically discuss the inaccessible porosity of these four American shale samples in the next section. The pore volume at throat diameters of 2-2.8 nm is not measured by the MICP method, but pores in this interval are shown to contribute a significant portion of the total porosity as measured using SANS and LPNP. Similar porosity and cumulative pore volume distributions are obtained from the SANS and LPNP methods, in the order Niobrara marl > Wolfcamp shale > Utica shale > Bakken shale, and these show a discrepancy with the MICP results (Table 3). Based on the MICP tests, although the Niobrara marl sample exhibits higher porosity and cumulative pore volume than the other samples, the Wolfcamp and Utica shale samples display lower values than the Bakken shale sample. There are at least two possible reasons for such a discrepancy. On the one hand, the pore-throat size controls the intrusion of mercury into a connected pore network. According to the Washburn equation 43 with the confinement correction for shale nanopores 27 , the pores with throats larger than 2.8 nm can be filled by mercury at the maximum pressure (413 MPa) achieved by the instrument. Figure 4 shows that a greater amount of pores with diameters less than 2.8 nm exists in the Wolfcamp shale compared to the Bakken shale. It is probable that there is a larger pore volume connected via pore throats less than 2.8 nm in the Wolfcamp shale than in the Bakken shale, which causes the discrepancy between MICP and the other methods. On the other hand, due to capillary resistance, effective pressures can be generated during MICP tests, which could result in pore collapse or crack closure 21,63 . Meanwhile, ductile minerals such as organic matter and clays in mudstone might also undergo elastic deformation 64 . According to high-pressure Wood's metal injection and Broad Ion Beam-SEM imaging results 21 , plastic deformation of the clay matrix leads to a cutting-off of pore pathways in the silt-rich Boom Clay. The Utica and Wolfcamp shale samples contain higher clay mineral contents than the other samples (Table 2), which may explain the lower volume of mercury injected during the MICP test. Comparison of pore-size distribution.
To aid the comparison of the pore-size distributions obtained from SANS and from the fluid-invasion methods, we unify the coordinate system (dV/dD vs. D) and compare the results with combined data from the different methods (Fig. 7). The SANS and LPNP methods (both measure pore bodies) give comparable pore-size distributions in all samples, especially for pore diameters larger than 10 nm, indicating a relatively good pore connectivity in this pore size range. In contrast, the relatively large difference in pore size distributions between SANS and LPNP in the Wolfcamp and Utica shale samples indicates a poorly connected pore system, which may also account for the low porosity and cumulative pore volume obtained from the MICP test. There is an obvious discrepancy between the MICP results and the other methods for all samples. The pore-size distribution from MICP is higher than the SANS and LPNP data for pore sizes <8-20 nm, beyond which the values decrease rapidly. More importantly, MICP essentially measures a pore-throat distribution, while the SANS and LPNP methods quantify pore bodies. Therefore, the incremental volume of mercury at a given pressure represents the connected pore volume accessed through pore throats of the corresponding diameter during MICP tests 21 . A number of publications have shown that MICP would overestimate the small pores and underestimate the large pores in various fine-grained materials because of complicated pore shapes [64][65][66] . In fact, pore-throat distributions are strongly controlled by pore shapes. SEM observations show that ink bottle-shaped pores are common in mudstones 11,67 , thus the larger volume of "bottle" pore bodies is counted within the "neck" pore throat size range during MICP analysis, which tends to skew the apparent pore size distribution. The measured hysteresis between the intrusion and extrusion curves can provide pore-body/pore-throat ratios 68 , which are important for understanding pore shape. The work of Anovitz and Cole (2015) on shale pointed out that as pore size decreases, the pore-body size approaches the pore-throat size 69 . Figure 8 shows the MICP results of the pore-body/pore-throat ratio for the four shale samples, which ranges from 1.4 to 420 in the 2.8-3.7 nm pore-throat diameter interval (Fig. 8). Body-throat ratios generally decrease across the samples in the order Niobrara marl (39-420) > Wolfcamp shale > Utica shale (5.2-7.5) > Bakken shale (1.4-1.9). The pore-body/pore-throat ratios of these four shale samples are generally consistent with those of the Posidonia Shale (ranging from 1 to 2000 in the 3 to 7.2 nm pore-throat diameter interval) from the Hils area in Germany, as reported by Klaver et al. (2012) 67 . The high pore-body/pore-throat ratio is the main factor resulting in an overestimation of small pores and an underestimation of large pores by the MICP method. The Niobrara marl shows a substantially higher pore-body/pore-throat ratio than the other samples, leading to a larger difference in pore size distribution between MICP and the other methods at pore throat diameters smaller than 10 nm (Fig. 7A). In addition, another possibility for the discrepancy of the MICP results is that the compression of the samples at high intrusion pressures would shift the peak towards smaller pore sizes 70 , although compression has only a minor effect. Determination of multi-scale closed (inaccessible) porosity. The connectivity of pores makes a significant contribution to matrix permeability and gas diffusion in pores 71 .
Closed porosity may be a key factor controlling oil/gas storage, transport pathways, and production behavior 23,32 . SANS data provide the total porosity, while fluid-invasion methods give the connected porosity accessible from the sample edge. Therefore, the fraction of closed porosity inaccessible to fluids such as N 2 and mercury can be determined from the difference between the porosities obtained by SANS and by fluid-invasion methods (Table 4). The total inaccessible porosity for mercury ranges from 52.3% to 89.1% for the four samples in the pore-diameter interval of 2.8 to 600 nm, while that for N 2 ranges from 46.3% to 67.1% based on LPNP data in the interval of 2 to 300 nm. The difference is attributed to the low value of porosity obtained from MICP. The inaccessible porosity in these four samples is slightly higher than that of the Barnett shale, with a reported inaccessibility to CD 4 of about 30% 28 , and of Alberta Cretaceous shale in Canada, with a value of 20-37% 43 , but is close to the 69.9% reported for over-mature Longmaxi carbonaceous shale in China 31 . To investigate the relationship between the inaccessible porosity and pore size, we also calculate the multi-scale (five pore-diameter intervals) inaccessible porosity for N 2 using the SANS and LPNP methods, as they both measure pore bodies (Table 4). The inaccessible porosity of the four samples at different pore-diameter intervals shows different values and distributions (Fig. 9). Overall, the high inaccessible porosity occurs at pore diameters <10 nm, with values ranging from 60.2% to 97.9% and an average of 86.1%. In the pore diameter range of 10-50 nm, the Wolfcamp, Bakken and Utica shale samples show similar values of inaccessible porosity (56.4-57.9%), while the Niobrara marl exhibits a very low inaccessible porosity of 4.49%, corresponding to an "overlap" region of the SANS and LPNP analyses (Fig. 7A). The distributions of inaccessible porosity for the Niobrara marl and Wolfcamp shale display a similar trend, with an initial increase followed by a decrease, which is consistent with the CD 4 inaccessible porosity of the Barnett shale 3 . In contrast, the inaccessible porosity of the Bakken and Utica shale samples decreases with increasing pore diameter. Specifically, the volume of inaccessible pores in these two shale samples is low in the region of pore diameters >100 nm, which explains the agreement of the corresponding pore volume distributions obtained from the SANS and LPNP methods (Fig. 7C,D).
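The multi-scale inaccessible porosity described above is obtained by differencing the SANS (total) and fluid-invasion (connected) porosities, either per sample or per pore-diameter interval. A minimal sketch of that arithmetic is given below; the values used are placeholders, not the Table 4 data.

```python
def inaccessible_fraction(phi_total, phi_connected):
    """Fraction of the total (SANS) porosity that is inaccessible to the
    invading fluid (N2 or Hg), for a sample or a pore-size interval."""
    if phi_total <= 0.0:
        raise ValueError("total porosity must be positive")
    return max(0.0, (phi_total - phi_connected) / phi_total)

# illustrative porosities in volume %, not measured values
phi_sans, phi_lpnp = 4.0, 1.8
print(f"inaccessible to N2: {100.0 * inaccessible_fraction(phi_sans, phi_lpnp):.1f}%")
```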
Possible factors leading to closed porosity. Pore networks in shale are complex and controlled by the primary composition of grain assemblages, organic matter content, thermal maturation and burial diagenesis 11,[72][73][74][75][76] . In this study, we preliminarily analyze the influencing factors leading to the closed porosity. Closed pores can develop within mineral and organic matter particles (as intra-particle pores) and between mineral particles or mineral and organic particles (as inter-particle pores). The pore volume in organic matter particles increases during thermal maturation, and large amounts of organic matter pores form in the post-mature stage 10,18 . Based on FIB-SEM observations, the organic particles in Posidonia shales (at R o of 1.45%) from the Hils area only contain small pore networks, which are not well connected with the surrounding mineral matrix 77 . The four samples in this study are all in the oil window with a similar maturity level (R o ranging from 0.67% to 1.05%), indicating that organic matter pores should only just be developing 60,[78][79][80] . We suggest that the lack of abundant pores in the organic matter results in low connectivity of organic pores. Kuila and Prasad (2011) also suggest that pores in clay samples with diameters less than 10 nm are associated with interlayer spaces 81 . In their BIB-SEM study of early mature Posidonia Shale from the Hils area in Germany, Klaver et al. (2012) 67 indicated that large pores in fossils and calcite grains are connected through a low-permeability clay-rich matrix, which controls the connectivity of the matrix. Similar results were reported by Keller et al. (2011) 82 , who also found that the connected porosity in Opalinus clay is only about 10-20% based on the FIB-SEM technique. According to the multi-scale inaccessible porosity data, closed pores with diameters <50 nm (especially <10 nm) are dominant in the four shale samples. A large number of studies have shown that the diameters of organic matter-hosted and clay-associated pores are mainly located in this range 10,11,14,83 . A significant correlation is observed between closed porosity and clay content in the four shale samples. The Niobrara marl is dominated by calcite (peloids) with minor amounts of TOC and clay minerals. According to the work of Michaels (2014) 48 , the Niobrara Formation has undergone significant extents of diagenesis in the form of compaction, pressure-solution, and the subsequent reprecipitation of the pressure-solved calcite. However, all of Michaels' samples exhibited some amount of intercrystalline pores associated with the calcite in peloids. The mineral-associated interparticle pores were the most abundant pore type within the Niobrara marl samples 46 . Therefore, the Niobrara marl sample displays a low closed porosity, especially in pore diameters ranging from 5 to 10 nm. With higher clay contents than the Niobrara sample, the Utica and Wolfcamp shales develop a higher inaccessible porosity in the <50 nm pore-diameter interval. Plastic deformation of the clay matrix may lead to a cutting-off of pore pathways during compaction, which can contribute a significant volume of closed porosity. The presence of pores with diameters of 2-5 nm in shales can be correlated with the dominance of the illite-smectite type of clays 84 . 'Intra-tachoid' pores (∼3 nm) are reported to be formed by a stacking of elementary unit cells in tachoids, which are the building blocks of illite-smectite clays in rock-physics modelling of shales 84 . These incompressible 3 nm pores are hard to connect. Therefore, the high inaccessible porosity occurs at pore diameters ranging from 2 to 5 nm in these clay-rich samples. According to the literature 85 and our SEM observations, the organic matter pores are not well developed in the Bakken Shale, while inter- and intraparticle mineral pores are the dominant pore types. We interpret that the abundant plastic organic matter, at a TOC content of 9.79%, can clog pore throats and lead to more isolated pores after compaction, owing to the lack of a more rigid mineral framework. Note that SEM imaging of organic-rich shales can only resolve pores with sizes larger than 5 nm, while 2 nm pores are near the instrument's resolution 20 . The absence of imaged pores with diameters of 2-5 nm suggests that there is still an ambiguous structure within either the mineral or the organic matter. Evaluation of methods used for pore structure analyses.
Pore structure has a significant influence on reservoir quality, oil/gas contents and fluid properties in shale 15,58,76,86,87 . The differing pictures of shale pore structure obtained from different measurement methods indicate that the accuracy of pore structure measurements may be problematic, even though the individual measurement techniques are sound 88 . The discrepancy between results from different methods is partly related to the different detection ranges of pore diameter, the measured space (pore body or pore throat), the pore type (open or closed pores), and the theoretical bases and data reduction models, but a single factor cannot thoroughly explain all of the differences. For the SANS method, the intensity of scattered neutrons is highly sensitive to the choice of the average SLD (scattering length density) value for the rock matrix. Error in the measured porosity and pore-size distributions could result from core-scale shale heterogeneity leading to an inaccurate SLD calculation. It is generally accepted to use an average SLD for the mineral matrix, but the SLD likely varies with pore size 28 , due to the size dependence of both the geometry and the associated pore wall material of shale porosity. For example, large pores mainly occur between mineral particles, and small pores are commonly developed in organic matter particles in shale, so both minerals and organic matter contribute significantly to the SLD. In addition, high-density minerals, such as pyrite, could have a strong effect on the calculated SLD 45 . However, pyrite contributes only 0-3.6% to the bulk mineralogy in our samples, and its influence can be excluded. One of the major uncertainties of the LPNP method is the influence of sample crushing. The primary structure and fabric will change at the microscopic level, which inevitably changes the surface properties of the samples and may alter the original pore structure or generate new pore space [89][90][91] . During the crushing process, compression and shear forces acting on the samples also generate smaller fragments and induce fracture propagation 92 . In addition, the lack of a unified grain-size standard for LPNP tests raises questions about direct data comparisons among different laboratories 93 . In general, the use of smaller shale particles would increase the measured micro- and meso-pore volumes by enhancing pore accessibility, according to Han et al. (2016) and Wei et al. (2016) 94,95 , who studied sample sizes from 0.113 to 4 mm and from 0.075 to 0.25 mm, respectively. Sample size is probably a major contributor to the observed differences among methods in this work, especially for the MICP method, which uses a 10-mm sized cubic sample, while the SANS (150-μm thick thin section) and LPNP (500-850 μm particles) methods use similar, and much smaller, sample sizes. Figure 9. Plots of total and inaccessible pore volume vs. multiple-scale pore diameter intervals for the four samples using SANS and low-pressure N 2 adsorption methods. The work of Hu et al. (2012) 5,96 indicates the distribution of edge-accessible connected pore spaces in rock, and the sample-size-dependent pore connectivity is more pronounced for fine-grained shale. To further assess the sample size effect on pore accessibility, we are currently measuring both bulk and particle densities, and MICP porosities, for a range of rocks with a wide range of pore connectivity, at multiple sample sizes.
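Returning to the SLD sensitivity discussed above: the average matrix SLD is commonly estimated as a volume-weighted mean of the component SLDs. The sketch below illustrates that bookkeeping only; the composition and the SLD values are rough, literature-style placeholders rather than this study's mineralogy.

```python
def average_sld(vol_fractions, slds):
    """Volume-weighted average scattering length density of the solid matrix.
    vol_fractions: mineral -> volume fraction of the solid phase
    slds: mineral -> coherent neutron SLD in units of 1e-6 A^-2"""
    total = sum(vol_fractions.values())
    return sum(vol_fractions[m] * slds[m] for m in vol_fractions) / total

# placeholder composition; SLDs are approximate literature values (1e-6 A^-2)
fractions = {"quartz": 0.35, "calcite": 0.30, "illite": 0.25, "organic matter": 0.10}
slds = {"quartz": 4.18, "calcite": 4.69, "illite": 3.9, "organic matter": 2.0}
print(f"average matrix SLD ~ {average_sld(fractions, slds):.2f} x 1e-6 A^-2")
```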
The SANS method examines the total porosity and cannot provide information on connected pores. However, these interconnections, namely pore throats, are of critical importance to oil/gas transport. Although the SANS and LPNP methods provide comparable pore-size distributions, they cannot distinguish pore bodies from pore throats. The MICP measurements provide direct information on pore throats, although this leads to some of the discrepancies in pore-size distribution noted between MICP and the other methods. The high hydrostatic pressures during MICP tests likely lead to inelastic deformation via compaction of the shale samples 63 . According to the work of Penumadu and Dean (2000) 97 , the compaction volume can reach up to 20% in kaolin clay. To validate the MICP method, we have utilized several approaches, such as runs without solid samples and with impervious samples. In addition, we have been periodically (monthly) analyzing a standard sample (soft alumina silica) with a mono-modal pore size of 7 nm. The results show that no changes in pore-throat size are observed with the same test protocol as for the shale samples (e.g., pressures up to 60000 psi), suggesting that the material compression effect is very small using our developed MICP method. With an awareness of the limitations associated with each method, combining these complementary techniques serves as an effective way to understand the pore structure of shales and to calculate the multiple-scale inaccessible porosity. Conclusions (1) SANS is an effective method for determining the total porosity and pore-size distribution in shale, which has a strong presence of nm-sized pore spaces. (2) The pore structure of four typical American shale formations obtained from SANS and fluid-invasion methods shows an obvious difference, although the SANS and LPNP methods give comparable pore-size distributions considering their similarity in sample sizes and their measurement of pore bodies. The discrepancy between the MICP results and those from the SANS and LPNP methods is attributed to its characterization of the pore-throat distribution. (3) Multiple-scale (five pore-diameter intervals) inaccessible porosities for N 2 are determined using SANS and LPNP data. Overall, the high inaccessible porosity (ranging from 60.2% to 97.9% with an average of 86.1%) in the four shale samples occurs at pore diameters <10 nm, which we attribute to isolated organic matter-hosted and clay-associated pores.
v3-fos-license
2019-01-22T22:20:09.890Z
2018-07-18T00:00:00.000
56481207
{ "extfieldsofstudy": [ "Chemistry", "Biology", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.life-science-alliance.org/content/lsa/1/6/e201800259.full.pdf", "pdf_hash": "a54cd0db4c99a3c490b8e5cdb97d1ad31e9ce28a", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:118", "s2fieldsofstudy": [ "Biology" ], "sha1": "a54cd0db4c99a3c490b8e5cdb97d1ad31e9ce28a", "year": 2018 }
pes2o/s2orc
Mouse REC114 is essential for meiotic DNA double-strand break formation and forms a complex with MEI4 Mouse REC114 is essential for meiotic DNA double-strand break formation and forms a complex with IHO1. Its N-terminal region forms a Pleckstrin homology domain, while its C-terminal region interacts with MEI4. Introduction The conversion from diploid to haploid cells during meiosis requires the expression of a specific and highly differentiated meiotic program in all sexually reproducing eukaryotes. Indeed, meiosis is a specialized cell cycle composed of one replication phase followed directly by two divisions. At the first meiotic division, homologous chromosomes (homologues) are separated through a process called reductional segregation. In most species, reductional segregation requires the establishment of connections between homologues. To achieve this, homologous recombination is induced during meiotic prophase to allow homologues to find each other and to be connected by reciprocal products of recombination (i.e., crossovers) (Hunter, 2015). This homologous recombination pathway is initiated by the formation of DNA double-strand breaks (DSBs) (de Massy, 2013) that are preferentially repaired using the homologous chromatid as a template. Meiotic DSB formation and repair are expected to be tightly regulated because improper DSB repair is a potential threat to genome integrity (Sasaki et al, 2010;Keeney et al, 2014). In Saccharomyces cerevisiae, several genes are essential for their formation and at least five of them are evolutionarily conserved. Spo11, Top6bl, Iho1, Mei4, and Rec114 are the mouse orthologs of these five genes (Baudat et al, 2000;Romanienko & Camerini-Otero, 2000;Kumar et al, 2010;Robert et al, 2016a;Stanzione et al, 2016;Tesse et al, 2017) and are specifically expressed in mouse meiotic cells. SPO11 is a homologue of TopoVIA, the catalytic subunit of archaeal TopoVI, and is covalently bound to the 5′ ends of meiotic DNA breaks. This indicates that meiotic DSBs are formed by a mechanism with similarity to a type II DNA topoisomerase cleavage (Bergerat et al, 1997;Keeney et al, 1997;Neale et al, 2005). SPO11 acts with a second subunit, TOPOVIBL, homologous to archaeal TopoVIB (Robert et al, 2016a;Vrielynck et al, 2016). TOPOVIBL is quite divergent among eukaryotes, and in some species, such as S. cerevisiae, the homologous protein (Rec102) shares only one domain of similarity with TOPOVIBL (Robert et al, 2016a). Based on previous interaction studies between Rec102 and Rec104 (Arora et al, 2004;Jiao et al, 2003), which are both required for meiotic DSB formation, it has been proposed that the S. cerevisiae Rec102/Rec104 complex could fulfill the function of TOPOVIBL (Robert et al, 2016b). The IHO1, MEI4, and REC114 families, which have been shown to be evolutionarily conserved (Kumar et al, 2010;Tesse et al, 2017), have been studied in several organisms, including S. cerevisiae (Mer2, Mei4, and Rec114), Schizosaccharomyces pombe (Rec15, Rec24, and Rec7), Arabidopsis thaliana (PRD3, PRD2, and PHS1), Sordaria macrospora (Asy2, Mei4 ortholog not identified, and Asy3), Caenorhabditis elegans (Mer2 and Mei4 orthologs not identified, and DSB1/2), and Mus musculus. Several important properties of these proteins suggest that they act as a complex. Indeed, in S. cerevisiae (Li et al, 2006;Maleki et al, 2007) and M.
musculus (Stanzione et al, 2016), they were shown to co-localize as discrete foci on the axes of meiotic chromosomes, which are the structures that develop at the onset of meiotic prophase and allow the anchoring of chromatin loops. Their localization is SPO11-independent, as shown for the three S. cerevisiae proteins, for S. macrospora Asy2 (Tesse et al, 2017), for the IHO1 and MEI4 M. musculus proteins, for S. pombe Rec7 (Lorenz et al, 2006), and for C. elegans DSB1/2 (Stamper et al, 2013). They appear before or at the beginning of meiotic prophase, and the number of foci decreases as chromosomes synapse in S. cerevisiae (Li et al, 2006;Maleki et al, 2007) and in M. musculus (Kumar et al, 2010;Stanzione et al, 2016). In S. macrospora, where only Mer2 has been analyzed, its axis localization is also decreased at pachytene upon synapsis (Tesse et al, 2017). In C. elegans, foci of the Rec114 orthologs decrease with pachytene progression (Stamper et al, 2013). Importantly, in addition to interacting with axis-associated sequences, these proteins are also detected by chromatin immuno-precipitation (ChIP) to interact with DSB sites (Sasanuma et al, 2008;Panizza et al, 2011;Miyoshi et al, 2012;Carballo et al, 2013), which makes sense given their involvement in DSB formation. This interaction is weak and transient and is consistent with the tethering of chromatin loops to axes proposed to be established at or before DSB repair (Blat et al, 2002). The determinants of their localization are known to depend on several axis proteins. In S. cerevisiae, Mer2, Mei4, and Rec114 are detected particularly at domains enriched in Hop1 and Red1, two interacting meiotic-specific axis proteins (Panizza et al, 2011). The localization of Red1 depends on the meiotic-specific cohesin Rec8 through a direct interaction (Sun et al, 2015). The molecular organization and activity of the Mer2, Mei4, and Rec114 complex have remained elusive, however. Several studies have reported interactions between these three proteins, suggesting a tripartite complex in S. pombe (Steiner et al, 2010;Miyoshi et al, 2012), S. cerevisiae (Li et al, 2006;Maleki et al, 2007), and M. musculus (Kumar et al, 2010;Stanzione et al, 2016). The current knowledge on their direct interactions in vivo is limited and based only on yeast two-hybrid assays. Mer2 plays a central role and seems to be the protein that allows the recruitment of Mei4 and Rec114 onto chromosome axes. This view is based on the observation that the Mer2 orthologs, Rec15 in S. pombe and IHO1 in M. musculus, interact with the axis proteins Rec10 (Lorenz et al, 2006) and HORMAD1 (Stanzione et al, 2016), respectively. In S. cerevisiae, Mer2 is necessary for Rec114 and Mei4 recruitment to the axis (Sasanuma et al, 2008;Panizza et al, 2011). S. cerevisiae Mer2 is loaded on chromatin before prophase, during S phase, where it is phosphorylated, a step required for its interaction with Rec114 (Henderson et al, 2006;Murakami & Keeney, 2014). Thus, Mer2 coordinates DNA replication and DSB formation. Analysis of the Mer2 ortholog in S. macrospora revealed additional functions in chromosome structure (Tesse et al, 2017). Overall, it is thought that this putative complex (Mer2/Rec114/Mei4) might directly interact with factors involved in the catalytic activity (i.e., at least Spo11/Rec102/Rec104 in S. cerevisiae) at DSB sites. Interactions between Rec114 and Rec102 and Rec104 have been detected by yeast two-hybrid assays (Arora et al, 2004;Maleki et al, 2007). Moreover, in S.
pombe, an additional protein, Mde2 might bridge the Rec15/Rec7/Rec24 and Rec12/Rec6/Rec14 complexes (Miyoshi et al, 2012). However, no specific feature or domain has been identified in Mei4 or Rec114 to understand how they may regulate DSB activity. One could hypothesize that they play a direct role in activating or recruiting the Spo11/TopoVIBL complex for DSB formation. The hypothesis that these proteins might regulate DSB formation through some interactions is also consistent with the findings that Rec114 overexpression inhibits DSB formation in S. cerevisiae (Bishop et al, 1999) and that altering Rec114 phosphorylation pattern can up-or down-regulate DSB levels (Carballo et al, 2013). It is possible that Rec114 and Mei4 have distinct roles because Spo11 non-covalent interaction with DSBs is Rec114-dependent but Mei4-independent (Prieler et al, 2005), and Spo11 self-interaction depends on Rec114 but not on Mei4 (Sasanuma et al, 2007). However, in Zea mays and A. thaliana, the Rec114 homologue (Phs1) seems not to be required for DSB formation (Pawlowski et al, 2004;Ronceret et al, 2009). Here, we performed a functional and molecular analysis to determine whether mouse REC114 is required for meiotic DSB formation, and whether it interacts directly with some of its candidate partners. Rec114-null mutant mice are deficient in meiotic DSB formation We analyzed mice carrying a null allele of Rec114. In the mutated allele (here named Rec114 − and registered as Rec114 tm1(KOMP)Wtsi ), exon 3 and 4 were deleted, and a lacZ-neomycin cassette with a splice acceptor site was inserted upstream of this deletion. This deletion includes the conserved motifs SSM3, 4, 5, and 6 (Kumar et al, 2010;Tesse et al, 2017) (Figs 1A and S1). We analyzed the cDNA expressed from testes of Rec114 −/− mice and showed that the mutant allele is transcribed ( Fig S2A). However, because of the presence of a splice acceptor site in the cassette (EnSA, Fig S1), in the cDNA of Rec114 −/− mice, exon 2 is fused to DNA sequences from the cassette, themselves fused to exon 5 and 6 but out of frame ( Fig S1C). This cDNA from Rec114 −/− mice thus encodes for a putative protein containing exon 1 and 2 and lacking all other exons. We conclude that this allele is likely to be a null mutant. We confirmed the absence of detectable REC114 protein in Rec114 −/− mice by Western blot analysis of total testes extracts and after REC114 immunoprecipitation (Figs 1B and S2B). Heterozygous (Rec114 +/− ) and homozygous (Rec114 −/− ) mutant mice were viable. We also generated from this allele, another mutant allele (named and registered as Rec114 del ) without the insertion cassette. We performed all subsequent analyses using mice with the Rec114 − allele unless otherwise stated, and confirmed several phenotypes in mice carrying the Rec114 del allele. To monitor the consequences of REC114 absence on gametogenesis, we performed histological analysis of testes and ovaries. Spermatogenesis was altered in Rec114 −/− adult male mice, as indicated by the presence of major defects in testis tubule development compared with wild-type (Rec114 +/+ ) mice ( Fig 1C). Specifically, in Rec114 −/− animals the tubule diameter was smaller and tubules lacked haploid cells (spermatids and spermatozoa). In these tubules, the most advanced cells were spermatocytes, although some were also depleted of spermatocytes. Testis weight was significantly lower in Rec114 −/− than wild-type mice ( Fig S2C). 
In ovaries from Rec114 −/− mice, oogenesis was significantly affected, as indicated by the strongly reduced number of primordial follicles at 2 wk postpartum and their near absence at 8 wk (Fig 1D and E). In ovaries from 2 wk old Rec114 −/− mice, a significant number of secondary follicles, differentiated from the first wave of follicle growth, are detected. These follicles have not been subject to elimination, as observed in other DSB-deficient mice (Di Giacomo et al, 2005). Consistent with these gametogenesis defects, Rec114 −/− males and females were sterile. Indeed, mating of wild-type C57BL/6 animals with Rec114 −/− males and females (n = 3/sex) crossed for 4 months yielded no progeny. Mice carrying the Rec114 del allele displayed the same phenotypes, indicating that the cassette present in the Rec114 − allele does not cause the observed meiotic defects (Fig S7). In vivo REC114 interacts with MEI4 and these proteins display a mutually dependent localization REC114, MEI4, and IHO1 co-localize on the axis of meiotic chromosomes, and IHO1 is needed for MEI4 loading (Stanzione et al, 2016). First, we tested whether IHO1 loading required REC114 and MEI4. This was clearly not the case because IHO1 localization was similar in wild-type and in Rec114 −/− and Mei4 −/− spermatocytes (Figs 3A and S8A). This observation is consistent with a role for IHO1 in REC114 and MEI4 recruitment. We then tested whether MEI4 and REC114 regulated each other's localization. MEI4 forms 200-300 foci on meiotic chromosome axes at leptonema. Then, the focus number progressively decreases as cells progress into zygonema, and MEI4 becomes undetectable at pachynema. This decrease of MEI4 foci during meiotic progression is directly correlated with synapsis formation (MEI4 foci are specifically depleted from synapsed axes) and with DSB repair (MEI4 foci are excluded from DMC1 foci) (Kumar et al, 2010). At leptonema, the number of axis-associated MEI4 foci was reduced 3- to 4-fold in Rec114 −/− spermatocytes and oocytes (Figs 3B and C, S8B, and S9A), and their intensity was significantly decreased (by 1.75-fold in spermatocytes and by 1.9-fold in oocytes) compared with wild-type controls (Fig S9B). The MEI4 signal detected in Rec114 −/− gametocytes was higher than the nonspecific background signal observed in Mei4 −/− spermatocytes (Fig 3C). This suggests that REC114 contributes to, but is not essential for, MEI4 focus formation on the meiotic chromosome axis. REC114 foci co-localize with MEI4 and, like MEI4 foci, their number is highest at leptonema and then progressively decreases upon synapsis (Stanzione et al, 2016). We thus tested whether REC114 foci required MEI4 for axis localization. At leptonema, few axis-associated REC114 foci above the background signal could be detected in Mei4 −/− spermatocytes, where their number was reduced by more than 10-fold compared with wild-type cells (Fig 4A and B). However, this low level of REC114 foci in Mei4 −/− was still significantly higher than the number in Rec114 −/− gametocytes (Fig 4B). REC114 foci were not reduced in Spo11 −/− mice (Fig 4B), as previously reported for MEI4 foci (Kumar et al, 2010). This indicates that these proteins are loaded on the chromosome axis independently of SPO11 activity. Overall, MEI4 and REC114 are reciprocally required for their localization. MEI4 and REC114 co-localization, their mutual dependency for robust localization, and their interaction in yeast two-hybrid assays (Kumar et al, 2010) strongly suggested that these two proteins interact directly or indirectly in vivo.
Indeed, we could detect REC114 after immunoprecipitation of MEI4 in extracts from wild-type and Spo11 −/− mice (Fig 4C). These assays were performed using protein extracts from 14 dpp mice where cellular composition in the testes are similar in wild type and mutants deficient for DSB formation. In this assay, we observed that the amount of MEI4 recovered after immunoprecipitation with the anti-MEI4 antibody is reduced in extracts from Rec114 −/− . This observation may indicate a destabilization of MEI4 in the absence of REC114. In principle, this could also be analyzed by detecting proteins from a total extract, but MEI4 is not detectable in such extracts in our conditions. As REC114 was detected in total protein extracts, we could observe that the REC114 level is not altered in the absence of MEI4 (Figs 1 and S9C). Interestingly, IHO1 also was immunoprecipitated by the anti-MEI4 antibody suggesting that MEI4 interacts with both IHO1 and REC114 (Fig 4C). These three proteins could be a part of the same complex, or form two independent complexes. Although MEI4 and IHO1 did not interact in a yeast two-hybrid assay (Stanzione et al, 2016), the detection of IHO1 after immunoprecipitation of MEI4 in Rec114 −/− extracts ( Fig 4C) suggests a direct or indirect interaction between IHO1 and MEI4 in mouse spermatocytes. We note that after immunoprecipitation of MEI4, the level of IHO1 is not reduced in Rec114 −/− compared with the wild type, whereas the level of MEI4 is. Formally, this could be explained by an excess of MEI4 relative to the IHO1 protein available for interaction with MEI4. Alternatively, changes in protein complex structure in the absence of REC114 could lead to modification of interactions and their recovery in the conditions used for immunoprecipitation. REC114 and MEI4 form a stable complex These in vivo assays suggested that REC114 and MEI4 directly interact. To test whether these two proteins interact and form a stable complex in vitro, we produced recombinant full-length REC114 in bacteria (Fig 5). However, we could not produce recombinant fulllength MEI4 or its N-terminal or C-terminal domains alone. Conversely, when co-expressed with REC114, the N-terminal fragment (1-127) of MEI4 was soluble and could be co-purified with REC114 on Strep-Tactin resin (Fig 5A, lane 4), providing the first biochemical evidence of a direct interaction between REC114 and MEI4. To identify the REC114 region that interacts with MEI4, we produced the N-terminal domain and a C-terminal fragment (residues 203-254) of REC114 and found that the REC114 C-terminal region (but not the N-terminal domain) was sufficient for binding to MEI4 (Figs 5A, lanes 5, 6 and S10). Finally, we could purify the MEI4 (1-127) and REC114 (203-254) complex and show that the two proteins co-eluted as a single peak from the Superdex 200 gel filtration column (Fig 5B and C). Rec114 contains a pleckstrin homology (PH) domain To gain insights into the structure of mouse REC114, we produced the full-length protein in bacteria. Then, using limited trypsin proteolysis, we identified a stable fragment (residues 15-159) that was suitable for structural analysis. We determined the crystal structure of this REC114 N-terminal region at a resolution of 2.5Å by SAD using the selenomethionine (SeMet)-substituted protein. The final model, refined to an R free of 30% and R-factor of 25% included residues 15-150 (Table 1). 
Unexpectedly, the structure revealed that REC114 (15-150) forms a PH domain, with two perpendicular antiparallel β-sheets followed by a C-terminal helix (Fig 6). Several residues are disordered in loops between the β strands. In the SeMet protein dataset that we solved at 2.7Å, the crystallographic asymmetric unit contained two REC114 molecules, but the position of β2, which packs against β1, was shifted by three residues in one of the molecules. A Protein Data Bank search using the PDBeFold server at EBI revealed that REC114 (15-150) was highly similar to other PH domains and that the N-terminal domain of the CARM1 arginine methyltransferase (PDB code 2OQB) was the closest homologue (Fig S11). Mapping the conserved residues to the protein surface revealed that both β-sheets contained exposed conserved residues that could be involved in protein interactions with REC114 partners (Fig S12). In the crystal, the PH domain formed extensive crystallographic contacts with a symmetry-related molecule. Indeed, this interface, judged as significant using the PDBePisa server, buried a surface of 764Å 2 and included several salt bridges and hydrogen bonds formed by the well-conserved Arg98 of β6, and Glu130 and Gln137 of α1 (Fig S13A). To test whether REC114 dimerizes in solution, we analyzed the PH domain by size-exclusion chromatography-multiple angle laser light scattering (SEC-MALLS), which allows measurement of the molecular weight. Although the monomer molecular mass of the fragment was 16 kD (32 kD for a dimer), the MALLS data indicated a molecular weight of 24.7 kD for the sample at a concentration of 10 mg/ml. When injected at lower concentrations, the protein eluted later and the molecular weight diminished (21 kD at 5 mg/ml) (Fig S13B). These results could be explained by a concentration-dependent dimerization of the PH domain with a fast exchange rate between monomers and dimers that co-purify during SEC. The validation of the monomer interface and the significance of this putative dimerization will be interesting to evaluate.
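A rough way to read these SEC-MALLS numbers: if one assumes a simple two-state monomer-dimer mixture and that MALLS reports a weight-average molecular mass, the apparent masses translate into an approximate weight fraction of dimer at each concentration. This is an illustrative back-of-the-envelope calculation, not an analysis performed in the study.

```python
def dimer_weight_fraction(mw_apparent_kd, mw_monomer_kd=16.0):
    """Weight fraction of dimer in a two-state monomer/dimer mixture whose
    weight-average mass is mw_apparent_kd: Mw = w*2M + (1 - w)*M."""
    return (mw_apparent_kd - mw_monomer_kd) / mw_monomer_kd

for mw_app, conc in [(24.7, "10 mg/ml"), (21.0, "5 mg/ml")]:
    w = dimer_weight_fraction(mw_app)
    print(f"{conc}: ~{100.0 * w:.0f}% of the protein mass in dimers")
```

Under these assumptions, roughly half of the mass would be dimeric at 10 mg/ml and about a third at 5 mg/ml, consistent with the proposed concentration-dependent, fast-exchanging dimerization.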
Discussion Previous studies in yeast have shown that the putative complex involving S. cerevisiae Rec114, Mei4, and Mer2 is essential for meiotic DSB formation. Their transient localization at DSB sites (observed by ChIP) suggests that, in yeast, this complex may play a direct role in promoting DSB activity (Sasanuma et al, 2007, 2008;Panizza et al, 2011;Carballo et al, 2013). Several studies have shown the evolutionary conservation of these three partners. In mammals, MEI4 and IHO1 (the Mei4 and Mer2 orthologs, respectively) are required for meiotic DSB formation (Kumar et al, 2010;Stanzione et al, 2016). Here, we show that the REC114 function in the formation of meiotic DSBs is conserved in the mouse. Moreover, we provide the first direct evidence of the interaction between REC114 and MEI4 and identify a potential interaction domain in REC114 that includes previously identified conserved motifs. Properties of REC114 Our study revealed that the REC114 N-terminus is a PH domain that is composed of two sets of perpendicular anti-parallel β-sheets followed by an α helix. This domain is present in a large family of proteins with diverse biological functions and is mostly involved in targeting proteins to a specific site and/or in protein interactions. A subset of these proteins interacts with phosphoinositide phosphates (Lietzke et al, 2000;Lemmon, 2003). Several conserved, positively charged residues in the two β sheets, β1 and β2, that are important for this interaction are not present in REC114. However, subsequent studies revealed interactions between the PH domain and a variety of different partners, in some cases by binding to phosphotyrosine-containing proteins or to polyproline (Scheffzek & Welti, 2012). Therefore, the REC114 PH domain could be a platform for several interactions, some of which could involve phosphorylated serine or threonine residues, because it has been shown that ATR/ATM signaling through phosphorylation of downstream proteins regulates meiotic DSB activity (Joyce et al, 2011;Lange et al, 2011;Zhang et al, 2011;Carballo et al, 2013;Cooper et al, 2014). In terms of conservation of the REC114 primary sequence, it is remarkable that most of the previously described conserved motifs (SSM1 to 6) are structural elements (β sheets and α helices) within this PH domain and are readily identified in many eukaryotes (Kumar et al, 2010;Tesse et al, 2017). At the REC114 C-terminus, the SSM7 motif overlaps with a predicted α helical structure and is less well conserved. Moreover, its presence remains to be established in several species (Tesse et al, 2017). In this study, we demonstrated that this C-terminal domain directly interacts with MEI4, suggesting that this SSM7 region is evolutionarily conserved. The N-terminal domain of MEI4 that interacts with REC114 has a predicted α helical structure and includes two conserved motifs (Kumar et al, 2010). Interaction of REC114 with the chromosome axis A previous study showed that IHO1 is required for MEI4 and REC114 focus formation on the axis and that it directly interacts with REC114 in a two-hybrid assay (Stanzione et al, 2016). As IHO1 interacts with HORMAD1, IHO1 could act as a platform to recruit REC114 and MEI4. Such a mechanism would be similar to the one identified in S. cerevisiae for the recruitment of Rec114 and Mei4 by Mer2 (Henderson et al, 2006;Panizza et al, 2011). In agreement with this hypothesis, IHO1 association with the chromosome axis is not altered in the absence of MEI4 or REC114, similar to what is observed in S. cerevisiae (Panizza et al, 2011). Therefore, IHO1 could recruit REC114 by direct interaction, and this should allow MEI4 recruitment. Alternatively, we suggest a mechanism in which REC114/MEI4 would be recruited as a complex to the axis, as we observed a mutual dependency between these two proteins for their axis localization: the formation of REC114 axis-associated foci is strongly reduced in the absence of MEI4, and vice versa. The residual REC114 foci detected in the absence of MEI4 do not appear to be able to promote DSB formation, as DSB repair foci are abolished in Mei4 −/− mice as in Spo11 −/− mice, thus suggesting an active role for the REC114/MEI4 complex. MEI4 may also be able to interact (directly or indirectly) with IHO1 or with axis proteins independently of REC114, at least in a Rec114 −/− genetic background, as weak MEI4 axis-associated foci were observed in Rec114 −/− spermatocytes and oocytes and because IHO1 protein was detected upon immunoprecipitation of MEI4 in Rec114 −/− spermatocyte extracts. The details of these interactions and their dynamics during early meiotic prophase remain to be analyzed. Overall, the IHO1/MEI4/REC114 complex is expected to be the main component for the control of SPO11/TOPOVIBL catalytic activity. It may be essential for turning the catalytic activity on and off. In S.
cerevisiae, it has been proposed that the local control of meiotic DSB formation is constrained by the chromatin loop organization and involves Tel1 (ATM) (Garcia et al, 2015) and possibly also the Mer2/Mei4/Rec114 (IHO1/MEI4/REC114) complex. Indeed, Tel1/Mec1-dependent phosphorylation of S. cerevisiae Rec114 is associated with down-regulation of DSB activity (Carballo et al, 2013). The IHO1/MEI4/REC114 complex could be a limiting factor for DSB formation. In agreement, we noted that the number of cytologically detectable foci is of the same order (about 200) as the number of DSB events measured by the detection of DSB repair proteins. The shutting off of DSB formation that correlates with synapsis between homologues (Thacker et al, 2014) could be the direct consequence of the removal of the Hop1 (or HORMAD1 in mice) axis protein, resulting in the displacement of the Mer2/Mei4/Rec114 (IHO1/MEI4/REC114 in mice) complex from the axis. Additional studies on the protein-protein interactions and post-translational modifications will help to understand these important steps in the regulation of meiotic DSB formation. Mouse strains The non-conditional Rec114 mutant allele is referenced as 2410076I21Rik tm1(KOMP)Wtsi and named Rec114 −/− in this study. The Rec114 del allele, in which the inserted lacZ-Neo cassette was deleted, was obtained by expression of Flp in mice carrying Rec114 −/− . These mice are in the C57BL/6 background. The Mei4 −/− , Spo11 −/− , and Spo11 YF/YF strains were previously described (Baudat et al, 2000;Kumar et al, 2010;Carofiglio et al, 2013). (Fragment of a figure legend: Residues that are 100% conserved are in solid green boxes. The secondary structures of REC114 are shown above the sequences. The previously identified conserved motifs, SSM1 to SSM6 (Kumar et al, 2010;Tesse et al, 2017) [...].) All animal experiments were carried out according to the CNRS guidelines and approved by the ethics committee on live animals (projects CE-LR-0812 and 1295). cDNA analysis Total RNA was extracted from two testes of young (14 dpp) wild-type or Rec114 −/− mice using the GeneJet RNA Purification Kit (Thermo Fisher Scientific) according to the manufacturer's instructions. One μg of total RNA was treated with RQ1 RNase-free DNase (Promega) for 30 min at 37°C to degrade genomic DNA. One μg of total RNA was used for cDNA synthesis, using random primers and the Transcriptor First Strand cDNA Synthesis Kit (Roche) according to the manufacturer's instructions. Each PCR was performed using 0.05 μl of cDNA and the KAPA Taq PCR kit (Sigma-Aldrich). Oligonucleotide sequences are provided in Table S1. PCR cycling conditions were 3 min at 95°C; 35 cycles for 16U22/232L22 and 695U22/806L25 or 40 cycles for 16U21/644L22 and 16U21/688L24, with 15 s at 95°C, annealing for 15 s at 53°C, 51°C, 64°C, and 59°C for 16U22/232L22, 695U22/806L25, 16U21/644L22, and 16U21/688L24, respectively, and extension for 30 s at 72°C; followed by 5 min at 72°C at the end of the cycles. Amplified products were separated on 2% agarose gels. PCR products obtained from the oligonucleotide combinations 16U21/644L22 or 16U21/688L24 were run on a gel, and the most abundant product was purified and sequenced. Histology and cytology Testes and ovaries were fixed in Bouin's solution (Sigma-Aldrich) at room temperature overnight and for 5 h, respectively. After dehydration and embedding in paraffin, 3-μm sections were prepared and stained with periodic acid-Schiff for testes and with hematoxylin and eosin for ovaries.
Image processing and analysis were carried out with the NDP.view2 software (Hamamatsu). Spermatocyte and oocyte chromosome spreads were prepared by the dry-down method (Peters et al, 1997). Image analysis γH2AX was quantified using Cell profiler 2.2.0. The total pixel intensity per nucleus was quantified. The intensity of MEI4 foci was the mean pixel value within a focus. Axis-associated MEI4 foci were determined by co-labelling with SYCP3. Protein analysis Whole testis protein extracts were prepared as described in Stanzione et al (2016). For MEI4 immunoprecipitation, 3 μg of guinea pig anti-MEI4 antibody (Stanzione et al, 2016) was cross-linked to 1.5 mg of Dynabeads Protein A (Invitrogen) with disuccinimidyl suberate using the Crosslink Magnetic IP/Co-IP Kit (Pierce; Thermo Fisher Scientific). 3.6 mg of testis protein extract (from 14 dpp mice) was incubated with the cross-linked antibody at 4°C overnight. Beads were washed five times with washing buffer (20 mM Tris-HCl, pH 7.5, 0.05% NP-40, 0.1% Tween-20, 10% glycerol, 150 mM NaCl). Immunoprecipitated material was eluted by incubating the beads with the elution buffer (pH 2) for 5 min, and neutralized with the neutralization buffer (pH 8.5) (both buffers provided with the kit). The eluates were incubated with Laemmli loading buffer (1× final) at RT for 10 min, and divided in three aliquots, adding 10 mM DTT to one of them (for REC114 detection), followed by incubation at 95°C for 5 min. SPO11 immunoprecipitations on protein extracts from adult mice were performed as described (Pan & Keeney, 2009). Rabbit polyclonal anti-SPO11 was used for detection (Carofiglio et al, 2013). Protein expression, purification, and crystallization Mouse REC114 (15-159) fused to His-tag was expressed in E. coli BL21-Gold (DE3) (Agilent) from the pProEXHTb expression vector (Invitrogen). The protein was first purified by affinity chromatography using Ni 2+ resin. After His-tag cleavage with the TEV protease, the protein was further purified through a second Ni 2+ column and by size-exclusion chromatography. Pure protein was concentrated to 10 mg⋅ml −1 in buffer (20 mM Tris, pH 7.0, 200 mM NaCl, and 5 mM mercaptoethanol). The best-diffracting crystals grew within 1 wk at 20°C in a solution containing 0.25 M ammonium sulfate, 0.1 M MES, pH 6.5, and 28% PEG 5000 MME. For data collection at 100 K, crystals were snap-frozen in liquid nitrogen with a solution containing mother liquor and 25% (vol/vol) glycerol. SeMet-substituted REC114 was produced in E. coli BL21-Gold (DE3) and a defined medium containing 50 mg⋅l −1 of SeMet. SeMet REC114 was purified and crystallized as for the native protein. Data collection and structure determination Crystals of REC114 (15-159) belong to the space group P6 1 22 with the unit cell dimensions a, b = 107.5Å and c = 82.8Å. The asymmetric unit contains one molecule and has a solvent content of 71%. A complete native dataset was collected to a resolution of 2.5Å by the autonomous ESRF beamline MASSIF-1 (Bowler et al, 2015). The SeMet REC114 crystallized in the same conditions in the space group P4 2 2 1 2 and contained two molecules per asymmetric unit. A complete SeMet dataset was collected to a resolution of 2.7Å at the peak wavelength of the Se K-edge on the ID23-1 beamline at the ESRF. Data were processed using XDS (Kabsch, 2010). The structure was solved using SeMet SAD data. Selenium sites were identified, refined, and used for phasing in AUTOSHARP (Bricogne et al, 2003). 
The model was partially built with BUCCANEER (Cowtan, 2006), completed manually in COOT (Emsley et al, 2010) and refined with REFMAC5 (Murshudov et al, 1997). The model was used for molecular replacement to determine the structure using the native dataset and PHASER (McCoy et al, 2007). The native structure was finalized in COOT and refined with REFMAC5 to a final R-factor of 25% and R free of 30% (Table 1) with all residues in the allowed regions (96% in favored regions) of the Ramachandran plot, as analyzed using MOLPROBITY (Chen et al, 2010). Strep-tag pull-down assays MEI4 (1-127) was cloned in the pProEXHTb expression vector to produce a His-tag fusion protein. REC114 and its deletion mutants were cloned in the pRSFDuet-1 vector as Strep-tag fusion proteins. REC114 variants alone or co-expressed with MEI4 were purified using a Strep-Tactin XT resin (IBA). The resin was extensively washed with a buffer containing 20 mM Tris, pH 7.0, 200 mM NaCl, and 5 mM mercaptoethanol, and bound proteins were eluted with the same buffer containing 50 mM biotin and analyzed by 15% SDS-PAGE. The minimal REC114-MEI4 complex was then purified using the Strep-Tactin XT resin. The His-tag of MEI4 was removed with the TEV protease and a passage through a Ni 2+ column. The complex was then purified by size-exclusion chromatography.
v3-fos-license
2018-12-09T21:40:50.978Z
2018-09-03T00:00:00.000
55999361
{ "extfieldsofstudy": [ "Economics" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2071-1050/10/9/3146/pdf?version=1535988214", "pdf_hash": "23978773ff138068ae7512a77d17c2c0905da9d7", "pdf_src": "ScienceParseMerged", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:119", "s2fieldsofstudy": [ "Environmental Science" ], "sha1": "23978773ff138068ae7512a77d17c2c0905da9d7", "year": 2018 }
pes2o/s2orc
Impacting Factors and Temporal and Spatial Differentiation of Land Subsidence in Shanghai This paper uses Grey Correlation Degree Analysis (GCDA) to obtain and compare the relationships between major impacting factors and land subsidence, and finds the spatial characteristics of subsidence in the urban centre by Exploratory Spatial Data Analysis (ESDA). The results show the following: (1) Annual ground subsidence in Shanghai has occurred in four stages: slow growth in the 1980s, rapid growth in the 1990s, gradual decline in the first decade of the 21st century, and steady development currently. (2) In general, natural impact factors on land subsidence are more significant than social factors. Sea-level rise has the most impact among the natural factors, and permanent residents have the most impact among the social factors. (3) The average annual subsidence of the urban centre has undergone the following stages: “weak spatial autocorrelation”→ “strong spatial autocorrelation”→ “weak spatial autocorrelation”. (4) The “high clustering” spatial pattern in 1978 gradually disintegrated. There has been no obvious spatial clustering since 2000, and the spatial distribution of subsidence tends to be discrete and random. Introduction Land subsidence is the downward displacement of a land surface relative to a certain reference surface, such as mean sea level (MSL) or a reference ellipsoid [1].Subsidence is mainly caused by the compression of unconsolidated strata, which is due to natural and/or human activities.With a wide impact and long duration, land subsidence is a cumulative, uncompensated permanent loss of environments and resources [2,3].Subsidence negatively impacts the living environment of human beings, regional ecological security, and the sustainable development of a region.Land subsidence has occurred in more than 50 countries and regions since it was first discovered in Niigata, Japan in 1898 [4].There are many countries with substantial land subsidence, such as Japan, the United States, Mexico, Italy, Thailand, and China.In China, land subsidence was first discovered in Shanghai in 1921 [5].According to the estimated result of Zhang et al. (2003), land subsidence caused direct and indirect economic losses of 294.307 billion yuan RMB (35.56 billion USD) in Shanghai from 1921 to 2000, and this number has reached approximately 24.57 billion yuan RMB (2.97 billion USD) from 2001 to 2003 [6].Shanghai is the economic and financial centre of China and is important for local and national economic development.The land subsidence in Shanghai causes frequent damages of municipal infrastructure and impedes the construction and resource development activities, and is responsible for huge economic and ecological losses.It is the most serious geological disaster that Shanghai is facing at the present stage, and one of the serious obstacles for Shanghai to achieve sustainable development.Therefore, studying the impacting factors and temporal and spatial distribution patterns of land subsidence in Shanghai is important. As a global, ubiquitous, and long-standing geological problem, land subsidence has attracted worldwide attention.Many studies all over the world pay attention to the causes of land subsidence, monitoring techniques, disaster assessments, and governance measures.Rahman et al. (2018) explored the causes of land subsidence along the coast of Jakarta and found that erection weight and groundwater exploitation are the main causes in the region [7].Ma et al. 
(2011) found that high-rise buildings and over-exploitation of groundwater are the main causes in Tanggu [8]. Xu et al. (2016) proposed four main causes in the urban centre of Shanghai: additional loading caused by the construction of infrastructure, the cut-off effect due to construction in aquifers, drawdown of the groundwater level caused by leakage into underground structures, and the decrease in groundwater recharge from neighbouring zones [9]. For monitoring technologies, the current methods for land subsidence include levelling methods, trigonometric elevation methods, digital photogrammetry methods, InSAR methods, ground subsidence monitoring stations, and groundwater monitoring [10,11]. Sun et al. (2017) used the multi-track PS-InSAR technique to monitor land subsidence [12]. Mateos et al. (2017) integrated detailed geological and hydrogeological information with PS-InSAR (persistent scatterer interferometric synthetic aperture radar) data to analyse the land subsidence process in the Vega de Granada in Spain [13]. Liu et al. (2011) found that D-InSAR accurately monitors the deformation of a large area [14]. In terms of disaster assessment, Chen (2016) put forward a conceptual framework for the development of an indicator system for the assessment of regional land subsidence disaster vulnerability [15]. Abidin et al. (2013) concentrated on the roles of geospatial information for risk assessment of land subsidence [1]. Bhattarai and Kondoh (2017) accomplished a comprehensive risk assessment of land subsidence in Kathmandu Valley [16]. In terms of governance measures, Takagi (2018) studied the effectiveness and limitations of coastal embankments in controlling land subsidence in Jakarta [17]. Sun et al. (2011) proposed policy recommendations to control coastal land subsidence in Taiwan [18].

In general, most of the existing studies show the following features: (1) The majority of studies mainly consider natural factors. (2) They commonly adopted methods of traditional mathematical statistics, which have some limitations in chronological and spatial analysis. (3) Most studies with spatial analysis only described spatial distribution in a simple qualitative manner rather than studying spatial information in detail using quantitative methods.
This paper integrates natural factors and socio-economic factors to analyse the dynamic mechanism of land subsidence, because a city is a compound natural-social-economic system. The impacting factors selected in this paper are shown in Table 1. The reasons these factors are selected are as follows: (1) Over-exploitation of groundwater reduces the buoyancy of the underground aquifer [19,20] and changes the stress of the stratum. Both of these reactions cause land subsidence [21][22][23][24] by affecting the consolidation of the soil. Related studies found that groundwater exploitation is one of the major causes of land subsidence in Shanghai [4]. (2) Sea-level rise is an increase in global mean sea level as a result of an increase in the volume of water in the oceans. Sea-level rise is usually attributed to global climate change through thermal expansion of ocean water and melting of ice sheets and glaciers [25]. It worsens the land subsidence severely. (3) After the reform and opening up of China, rapid urbanization resulted in a large increase in permanent residents [26] and human activities, which aggravated land subsidence. (4) Urban industry has developed rapidly since the 1980s. Undoubtedly, the fast growth of GDP has contributed to dramatic changes in the natural environment [27] and to the large consumption of water, natural gas, petroleum, and other resources and energy extracted from the ground [28]. Extraction activities destabilize strata and therefore aggravate land subsidence [29]. (5) Transportation is an important link that connects regions and transports people and goods for urban development [30]. The increasing load on the road network makes land subsidence more serious by promoting consolidation and compression of the topsoil [31,32]. (6) Relevant studies show that the number of high-rise buildings has become a new cause of land subsidence in the process of urbanization [33]. (7) Rapid expansion of high-rise building area has become one of the main causes of the seriously increasing land subsidence in Shanghai [34]. Compared with previous studies, the innovations of this study are as follows: (1) It supplements social factors and compares them with natural factors [35]. (2) This paper adds a new perspective by analysing the annual growth (i.e., the incremental effect) of impacting factors, while most of the existing research [7][8][9] analyses the influence of the total amounts (i.e., the scale effect in this paper) on land subsidence. (3) GCDA is used in this study instead of traditional mathematical statistics because, in the analysis of long-term variable data, GCDA provides better integrity and captures the dynamics better than traditional mathematical statistics [36]. (4) ESDA was used in the spatial analysis of land subsidence in Shanghai to describe and analyse its spatial characteristics quantitatively and in detail.
Overview of the Study Area

Shanghai is located in the Yangtze River Delta on the south edge of the estuary of the Yangtze on the East Chinese coast. Shanghai faces Kyushu Island with the East China Sea in between. It borders Hangzhou Bay in the south, Jiangsu Province in the north, and Zhejiang in the west [37]. Shanghai is one of the four direct-controlled municipalities of China and one of the most populous cities in the world, with a population of more than 24 million in 2016. The city is a global financial centre [38] and transport hub with the busiest container port in the world [39]. Shanghai consists of 16 districts, of which seven are in the central city and nine are in the suburbs (Figure 1).

Data Source

The Shanghai land subsidence data and groundwater exploitation data used in this study are provided by the Shanghai Environmental Geological Bulletin. The land subsidence raster data with a resolution of 30 m × 30 m is obtained by interpolating the Shanghai land subsidence contour maps. The permanent residents, GDP (Gross Domestic Product), and number and area of high-rise buildings are provided by the Shanghai Statistical Yearbook. Sea-level data is collected based on historical tidal level data from the Shanghai tide gauge station shared by the Permanent Service for Mean Sea Level (http://www.psmsl.org/). The civil vehicles data is from the Qianzhan Database (https://d.qianzhan.com/xdata/details/a004230075771a68.html).

(1) Groundwater exploitation

The first deep well was drilled in Shanghai in 1960, which was when the city began to systematically extract groundwater to provide water for human activities. In addition, rapid industrial development in the 1980s led to a rapid increase in groundwater exploitation, though it has been controlled since 1965. Subsequently, Shanghai recharged underground aquifers with tap water to raise the groundwater level and restore soil elasticity. Overall, after the reform and opening up of China, the exploitation of groundwater in Shanghai increased year by year in the 1980s, increased first and then decreased in the 1990s, and then rapidly decreased after 2000 (Figure 2).
(2) Sea-level rise

Monitoring data shows that the global sea level has risen by 10-20 centimetres over the past 100 years, and it will rise faster in the future [40]. Obviously, irreversible sea-level rise and serious land subsidence will severely threaten urban development on account of the average ground elevation, which is only slightly higher than sea level in Shanghai [41,42]. According to Figure 3, sea level has been rising with fluctuations since 1978. Sea level rose 96.92 mm from 1978 to 1989, 48.75 mm from 1990 to 2000, and 89.17 mm from 2011 to 2016. Furthermore, sea level was generally stable from 2001 to 2010.

(3) Permanent resident growth

The number of permanent residents in Shanghai increased by 2.07 million, with an annual average increase of 186,800, from 1978 to 1989. The permanent residents increased by 2.75 million, reaching an annual average growth of 270,900, from 1990 to 2000 (Table 2). Moreover, the population has increased sharply since 2000, to 23.03 million at the end of 2010 and 24.20 million in 2016.

(4) GDP growth

According to statistics and calculations (Table 3), the GDP of 1989 was 2.6 times that of 1978, and by 2000, the GDP increased to 6.1 times that of 1990. Then, GDP growth slowed down.
In 2010, the GDP was 3.3 times that of 2001, and in 2016, the GDP was 1.4 times that of 2012.

(5) Increase of the civil vehicle number

This study mainly focuses on civil vehicles because there is a large gap between the number of civil vehicles and the number of public transport vehicles in Shanghai. After the reform and opening up of China, the number of civil vehicles in Shanghai increased exponentially due to the development of the national economy and the improvement of human living standards (Figure 4).

(6) Increasing number of high-rise buildings

A high-rise building is a tall construction compared to a low-rise building and is defined by its height differently in various jurisdictions. Emporis Standards defines a high-rise building as "A multi-floor structure between 35-100 m tall, or a building of unknown height from 12-39 floors" [43]. In the U.S., the National Fire Protection Association defines a high-rise building as being higher than 75 feet (23 m) or approximately 7 floors [44]. Most building engineers, inspectors, architects, and similar professionals define a high-rise building as a building that is at least 75 feet (23 m) tall [43]. Buildings with 8 floors or more are defined as high-rise buildings in this paper, considering that the average floor height is approximately 3 m [45]. The number of high-rise buildings in Shanghai has been increasing rapidly since the beginning of the 21st century (Figure 4), which worsens the land subsidence.

(7) Expansion of high-rise building area

In the 1980s, the high-rise building area in Shanghai increased by nearly 5.22 km². In the 1990s, 52.66 km² was added, and the total was up to 61.80 km² at the end of 2000. The area reached 219.11 km² in 2010 and 436.48 km² in 2016.
Methodology

First, we have identified several key time points according to the change in impacting factors since 1978 (Figure 5): 1989, 2000, and 2011. The factors were almost unchanged from 1978 to 1989 and began to grow slowly in 1989. They have grown rapidly since 2000. The year 2011 is also important, as the factors began to change noticeably: the number of civil vehicles decreased, the number and area of high-rise buildings and the GDP grew at a faster rate, and the number of permanent residents began to stabilize. According to these key time points, we divided the land subsidence into four stages: 1978-1989, 1990-2000, 2001-2010, and 2011-2016. Second, Grey Correlation Degree Analysis (GCDA) of land subsidence and the seven impacting factors at different stages in Shanghai from 1978 was carried out. According to the grey correlation degree, the influence of each factor was determined. Third, Exploratory Spatial Data Analysis (ESDA) was adopted, and Moran's I analysis and Getis-Ord General G analysis were used to explore the spatial autocorrelation and spatial distribution characteristics of the main subsidence area in Shanghai, the urban centre. Spatial autocorrelation refers to the potential interdependence of observations in the same distribution [46], and it is a kind of spatial correlation.
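As a minimal illustration of the first step, the following Python sketch groups annual subsidence values into the four stages and computes the per-stage annual average. The yearly values are placeholders rather than the measured Shanghai data, which come from the sources listed above.

```python
# Minimal sketch of the stage division used in the analysis.
# The yearly subsidence values below are placeholders, not the measured Shanghai data.
stages = {
    "1978-1989": range(1978, 1990),
    "1990-2000": range(1990, 2001),
    "2001-2010": range(2001, 2011),
    "2011-2016": range(2011, 2017),
}

# hypothetical annual average subsidence (mm/a), keyed by year
subsidence = {year: 5.0 + 0.1 * (year - 1978) for year in range(1978, 2017)}

for label, years in stages.items():
    values = [subsidence[y] for y in years]
    print(label, round(sum(values) / len(values), 2))
```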
(1) Grey Correlation Degree Analysis (GCDA)

In a complex system affected by multiple factors, the relationship between the factors is unknown [36]. It is difficult to distinguish which factors are dominant and which are subordinate, and which factors are closely related and which are unrelated. In the past, regression analysis, correlation analysis, variance analysis, principal component analysis, and other traditional statistical methods were commonly used [8,9]. However, these methods have strict requirements on data volume and sample distribution, and there may be problems such as the quantitative results being inconsistent with the qualitative results and failure of the standard statistical tests.

The theory of grey systems [47] was first introduced by Deng in 1982. The basic idea [48] is to determine whether the relations between different data sequences are close based on the similarity of the geometry of the data sequence curves. With the application of the linear interpolation method, the discrete observations of system factors can be transformed into segmented continuous polylines, whose geometric characteristics reflect the correlation of the data sequences. The closer the geometry of the polylines is, the greater the correlation between the data sequences, and vice versa. Grey correlation analysis provides a quantified measure of the system's development and is very suitable for dynamic process analysis. There are few requirements for the sample size and distribution, and there is no situation where the quantitative result is inconsistent with the qualitative analysis in grey correlation degree analysis. The formula is

γ(x_0(k), x_i(k)) = (min_i min_k |x_0(k) − x_i(k)| + ξ max_i max_k |x_0(k) − x_i(k)|) / (|x_0(k) − x_i(k)| + ξ max_i max_k |x_0(k) − x_i(k)|),

γ(X_0, X_i) = (1/n) Σ_{k=1..n} γ(x_0(k), x_i(k)),

where γ(x_0(k), x_i(k)) is the grey correlation coefficient at point k, γ(X_0, X_i) is the grey correlation degree between data sequences X_0 and X_i, and ξ is the resolution coefficient, with a value between 0 and 1. The smaller ξ is, the greater the difference between the correlation coefficient values of the data sequences and the stronger the discrimination between the data sequences. The calculation process of the grey correlation degree is shown in Figure 6.
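To make the computation concrete, here is a small self-contained Python sketch of the grey correlation degree as described above, using initial-value normalization and a resolution coefficient of 0.5. The series are placeholders, not the Shanghai data; note also that when several factor series are compared simultaneously, the min/max in the coefficient run over all series, whereas this sketch handles a single reference-factor pair.

```python
import numpy as np

def grey_correlation_degree(reference, factor, resolution=0.5):
    """Grey correlation degree between a reference series and one factor series."""
    x0 = np.asarray(reference, dtype=float)
    xi = np.asarray(factor, dtype=float)
    # initial-value normalization, as chosen for the socio-economic series in the text
    x0n, xin = x0 / x0[0], xi / xi[0]
    diff = np.abs(x0n - xin)
    # with a single comparison series, the min/max run over the points k only
    dmin, dmax = diff.min(), diff.max()
    coeff = (dmin + resolution * dmax) / (diff + resolution * dmax)  # coefficient at each k
    return coeff.mean()                                              # degree = mean coefficient

# hypothetical example: annual subsidence (reference) vs. one impacting factor
subsidence = [3.5, 3.8, 4.1, 4.6, 5.0, 5.3]
factor = [10.0, 11.5, 13.0, 15.2, 17.0, 19.1]
print(round(grey_correlation_degree(subsidence, factor), 4))
```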
Different data standardization methods will lead to different grey correlation degrees and deserve special attention. Common methods include mean normalization, initial value normalization, and zero-mean normalization (also called z-score normalization). In general, initial value normalization is applicable to socio-economic data, because most of these sequences show a steady growth trend, and initial value normalization makes the growth trend more obvious [49]. Therefore, initial value normalization is used in this study according to the statistical characteristics.

(2) Exploratory Spatial Data Analysis (ESDA)

Exploratory spatial data analysis (ESDA) is supported by spatial analysis, emphasizes the spatial correlation of events, focuses on the nature of spatial data, and explores the spatial patterns of data. ESDA includes global and local statistical analysis. In this paper, two global statistical analysis indexes, Moran's I and Getis-Ord General G [46], are used to carry out spatial autocorrelation and spatial clustering analysis.

(3) Spatial Autocorrelation

Spatial autocorrelation refers to the potential interdependencies among observed data of some variables in the same distribution area. Tobler (1970) once pointed out "the first law of geography: everything is related to everything else, but near things are more related to each other" [50]. Moran's I is a good indicator of spatial correlation and was proposed by Moran [51], an Australian statistician.
Moran's I reflects the degree of similarity among attributes of regional units that adjoin or are adjacent to each other. Moran's I is a rational number and, after normalization of variance, is bounded between −1 and 1. It is defined as

I = [n / (Σ_i Σ_j w_ij)] × [Σ_i Σ_j w_ij (x_i − x̄)(x_j − x̄)] / [Σ_i (x_i − x̄)²],

where n is the number of spatial units indexed by i and j, x is the variable of interest, x̄ is the mean of x, and w_ij is a matrix of spatial weights. Moran's I > 0 means there is a positive spatial correlation between observations, and the larger the Moran's I value is, the more significant the correlation is. When Moran's I is close to 1, the observations gather in specific areas; in other words, similar observations (high or low) tend to agglomerate in space. Moran's I < 0 indicates that there is a spatially negative correlation between the observations, and the smaller Moran's I is, the greater the spatial difference among the observations. When Moran's I approaches −1, the observations follow a discrete spatial pattern, and similar observations tend to be dispersed. Moran's I = 0 means that the observations are spatially random and there is no spatial correlation [52].
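A compact numerical sketch of this definition, using plain NumPy and a toy four-unit layout with binary contiguity weights, is given below; the values are placeholders, not the Shanghai subsidence raster.

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and a spatial weight matrix w."""
    x = np.asarray(x, dtype=float)
    w = np.asarray(w, dtype=float)
    n = x.size
    z = x - x.mean()
    num = np.sum(w * np.outer(z, z))   # sum_i sum_j w_ij (x_i - mean)(x_j - mean)
    den = np.sum(z ** 2)               # sum_i (x_i - mean)^2
    return (n / w.sum()) * (num / den)

# toy example: 4 units on a line, neighbours share an edge (binary contiguity)
x = np.array([10.0, 12.0, 30.0, 32.0])
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(round(morans_i(x, w), 3))
```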
(4) Spatial Clustering

Clustering is the grouping of observations according to a similarity criterion, which maximizes the intra-group similarities and the differences among groups to discover meaningful structural features. "High/low" spatial cluster analysis (also known as Getis-Ord General G analysis) determines which observations are clustered based on the possibility of data clustering. In the results of General G, high-high clustering shows that observations larger than the mean are spatially clustered, low-low clustering indicates that observations smaller than the mean are spatially clustered, and "not significant" means that the observations are not spatially clustered. The method was proposed by Ord and Getis [53]. In this method, the z-score and p-value reflect statistical significance and determine whether to reject the null hypothesis, which states that the study objects are randomly distributed.

The z-score is a multiple of the standard deviation. The higher (or lower) the z-score is, the more clustered the observations are. A positive z-score greater than the threshold indicates high-value clustering, a negative z-score smaller than the threshold indicates low-value clustering, and z = 0 indicates no clustering of observations. The p-value is defined as the probability, under the null hypothesis, that the spatial pattern of the observations is random. The null hypothesis is rejected if this probability is less than or equal to a small, fixed but arbitrarily pre-defined threshold value, which is commonly set to 0.10, 0.05, or 0.01 (Table 4).

Development of Land Subsidence in Shanghai

The land subsidence in Shanghai is severe. Land subsidence was noticed in the early 1920s and has a long history of more than 90 years [55]. Compared with 1921, the ground in the urban centre has subsided by approximately 2 m, and the maximum subsidence is approximately 3 m (Figure 7). Land subsidence leads to serious water hazards in urban areas [56], poor inland navigation, frequent damage to municipal infrastructure, and other urban problems. Clearly, land subsidence has become one of the main restrictive factors for efficient and stable economic growth and sustainable development in Shanghai. The annual average land subsidence in Shanghai has distinct characteristics at different times: (1) The 1980s (1978-1989) were a period of slow growth; the annual average land subsidence had an upward trend and a small increase. (2) Land subsidence increased rapidly in the 1990s (1990-2000), and the annual average increase was 4.09 times that in the 1980s. (3) Land subsidence was effectively controlled during the first decade of the 21st century (2001-2010), and the annual average subsidence gradually decreased; in fact, the subsidence decreased from 14.3 mm/a in the 1990s to 8.52 mm/a, a decrease of 40.42%. (4) The second decade of the 21st century (2011-2016) witnessed the stable development of land subsidence in Shanghai: land subsidence decreased to 5.36 mm/a, which was 37.01% lower than that of the previous stage, and remained stable (Figure 8). Incidentally, only the statistics from 2011 to 2016 were analysed in the last stage because of the lack of subsidence data after 2016.
Analysis of the Impacting Factors of Land Subsidence

The results of GCDA between the annual average land subsidence and the impacting factors, which consist of the annual average growth of groundwater exploitation, sea-level rise, permanent residents, GDP, civil vehicles, the number of high-rise buildings, and high-rise building area, are shown in Table 5. As the incremental effect shows in Table 5, (1) the impact of natural factors is more significant than that of social factors, because the mean degree of natural factors is greater than that of social factors. (2) The increase in permanent residents is the most important cause of annual average land subsidence in Shanghai, since it has the highest grey correlation degree of 0.8719. (3) The influence of annual average sea-level rise is positive and very strong, similar to that of groundwater exploitation on land subsidence. (4) High-rise buildings, both in number and in area, aggravate land subsidence. (5) The grey correlation degree between annual average GDP growth and land subsidence is 0.4655, so there is a small positive impact. (6) Ranked by their incremental influence on land subsidence, the factors are ordered as follows: permanent resident growth > sea-level rise > groundwater exploitation > increase of the civil vehicle number > increasing number of high-rise buildings > expansion of high-rise building area > GDP growth.

In addition to the annual average growth of these factors, how does their total amount affect land subsidence? In this paper, an incremental effect is defined as the effect of the annual growth of impacting factors on land subsidence, and the scale effect is defined as the effect of the total amount of impacting factors on land subsidence. Groundwater exploitation is excluded in the scale effect analysis because it is seriously affected by land subsidence measures such as groundwater recharge.

As the scale effect shows in Table 6, (1) natural factors, mainly sea-level height, are the major cause of land subsidence. (2) The number of permanent residents still has a very large impact, which is slightly less than that of the sea-level height. (3) Civil vehicles are the third crucial influence factor, with a correlation degree of 0.5725, which is less than that of the incremental effect. (4) GDP, the number of high-rise buildings, and high-rise building area have significant influences on cumulative land subsidence. (5) The level of impact of all factors is ordered from high to low as follows: sea-level height, permanent residents, civil vehicles, GDP, number of high-rise buildings, and high-rise building area. In conclusion, the impact of natural factors on land subsidence in the urban centre is greater than that of social factors, and the two influence different aspects: (1) In terms of groundwater exploitation, the effect of increasing exploitation is significant, so strict control of groundwater exploitation is essential for land subsidence prevention. (2) There is a strong positive correlation between sea-level rise and land subsidence; thus, sea-level rise should be a focus. (3) Permanent residents have a large positive impact, and their scale effect is greater than their incremental effect. Therefore, the more than 24 million permanent residents, with an annual increase of nearly 200,000, have become a substantial driver of land subsidence in recent years. (4) The increase of the civil vehicle number is one of the most important causes of land subsidence in Shanghai because of both its incremental and scale effects.
(5) All of the other social factors have significant positive impacts on land subsidence, including GDP, the number of high-rise buildings, and high-rise building area. However, there are some differences between their incremental and scale effects: the GDP has a larger scale effect, while the number and area of high-rise buildings have larger incremental effects.

Spatial Autocorrelation of Land Subsidence in the Urban Centre

The land subsidence in the urban centre has developed as "weak spatial autocorrelation" → "strong spatial autocorrelation" → "weak spatial autocorrelation". (1) It had a weak positive spatial autocorrelation in 1978, as the Moran's I was 0.3607. This spatial distribution pattern was a result of the minor land subsidence; there was only one subsidence funnel (an area where the ground has gradually subsided from the periphery towards the centre) in Yangpu District, and no significant subsidence occurred in most areas of the urban centre. (2) There was a strong spatial autocorrelation of land subsidence in the 1980s and 1990s. From 1978 to 1989, Moran's I increased sharply to 0.7095, representing a strong positive spatial autocorrelation. This result shows that the subsidence funnel worsened and the subsidence range expanded. By 2000, Moran's I was still as high as 0.5707. There was still a strong positive spatial autocorrelation of land subsidence, indicating that the land subsidence and its spatial expansion in the urban centre had intensified. (3) Land subsidence has followed a weak spatial autocorrelation pattern since 2000. By 2016, Moran's I decreased to 0.20 as effective subsidence prevention measures were taken. During this period, the strong spatial autocorrelation pattern of land subsidence was gradually broken and changed to a weak spatial autocorrelation pattern, indicating that subsidence funnels were dispersed in the urban centre, but their spatial expansion was moderated.

Spatial Clustering Pattern of Land Subsidence in the Urban Centre

The "high/low" cluster analysis of land subsidence found that the clustering increased in the last century and decreased in the 21st century. This conclusion is consistent with the spatial autocorrelation development described in the previous section. (1) In 1978, 1989, and 2000, each z-score was greater than 2.58 and each p-value was less than 0.01. According to Tables 4 and 7, the confidence coefficient of the "high clustering" distribution of the land subsidence in the urban centre was 99%.
(2) Since 2000, land subsidence has been randomly distributed, and there have been no obvious clustering characteristics.

As seen in Figure 9, (1) land subsidence followed a "high clustering" pattern in 1978, and the "high-high" clustering was mainly in Yangpu District and Huangpu District, while the "low-low" clustering was distributed in four districts: Putuo District, Changning District, Xuhui District, and Pudong New Area. (2) The high subsidence area expanded and the low subsidence area shrank in the 1980s, as the "low-low" clustering only occurred in Xuhui District and Pudong New Area in 1989. (3) Land subsidence was high in the urban centre in the 1990s. A new subsidence funnel rapidly formed, and the low subsidence area in Pudong New Area was eliminated. Moreover, a new "high-high" clustering area was formed in Changning District, while the "low-low" clustering in Pudong New Area disappeared by 2000. (4) A spatially random distribution pattern was formed after 2000. The trend of the high subsidence area was alleviated, and the "high clustering" pattern was gradually eliminated because of the adoption of various subsidence control measures.
Discussion

(1) The study finds that land subsidence is closely related to human activities and social development and has become a major constraint on sustainable development; therefore, preventing and controlling land subsidence is imperative for the city to become a metropolis of excellence, innovation, humanity, ecology, and culture and an international centre of economy, finance, trade, shipping, science, and technology. (2) Generally, resources and the environment are the material basis and fundamental guarantee for social progress and economic development. Social factors are the main bearing and direct drive for wealth creation and civilization. Different impacting factors have different influences on land subsidence in Shanghai. Therefore, in the process of urbanization, industrialization, and modernization, only by considering the comprehensive needs of urban development and enhancing the awareness of "natural-social" protection can Shanghai cooperate and compete better internationally and achieve the goal of sustainable development of the economy, society, culture, and ecology. (3) The change in spatial interaction shows that the prevention and control of land subsidence in Shanghai has achieved remarkable results by gradually weakening the cumulative effect and the geological influence of the subsidence funnels on adjacent areas. These achievements confirm the positive role of subsidence treatment and indicate that the implementation of urban subsidence prevention is effective. However, additional work is needed to fully control the intensification of subsidence and completely eliminate the subsidence risk the city is facing. (4) The majority of previous studies in China [57,58] and overseas [59,60] analysed only natural factors or a single social factor [56] of land subsidence in Shanghai. Few researchers considered integrated "natural-social-economic" factors; in particular, the link between social factors and land subsidence has been ignored. However, human social activities contribute extraordinarily to a series of complex effects in Shanghai, a metropolis with a large population. This paper demonstrates the considerable relation between social factors and land subsidence and reveals that permanent residents are the most powerful social factor. At the same time, this study is based on two different perspectives, the incremental effect and the scale effect, and compares the results from these two research perspectives to explore the differences among the various impacting factors. The idea of dual-view research broadens the perspective of this type of research. (5) In terms of the scale effect, groundwater exploitation and sea-level rise are the main impacting factors of land subsidence in Shanghai. This conclusion is consistent with the results of Wang [4], Li et al. [57], and Xu et al. [9]. (6) The analysis of spatial distribution is rare in the existing studies on land subsidence in Shanghai, and most studies do not discuss spatial information. A small part of the research simply describes the objective phenomenon without quantitative spatial data support [61]. The application of ESDA accomplishes numerical spatial analysis and deeply explores the spatial information of land subsidence in Shanghai based on powerful spatial analysis theories and geographical calculation methods. A comprehensive and systematic interpretation of spatial information will better serve the sustainable development of the city.
Conclusions

(1) Land subsidence is a geological engineering problem that accumulates slowly. The annual average land subsidence in Shanghai has had distinct characteristics during the periods since the reform and opening up of China. Land subsidence has experienced four stages: slow growth in the 1980s, rapid growth in the 1990s, gradual decline in the first decade of the 21st century, and steady development in the current stage. (2) Land subsidence in Shanghai is the result of the combined effects of various factors, among which the influence of natural factors is greater than that of social factors. Sea-level rise is the most important natural factor, while the influence of groundwater exploitation has decreased in recent years. At the same time, permanent residents play the most important role among the social factors: the large number and rapid growth of permanent residents are largely responsible for the land subsidence in Shanghai. (3) The rate of land subsidence in Shanghai is spatially non-uniform, and it has different spatial distributions at different stages. Moran's I analysis proves that the spatial distribution of annual average land subsidence in the central urban area has experienced the development process of "weak spatial autocorrelation" → "strong spatial autocorrelation" → "weak spatial autocorrelation" since 1978. (4) The annual average land subsidence has gradually evolved from a "high clustering" pattern to a distribution with no obvious clustering in the urban centre since 1978, and its spatial development tends to be discrete and random. That is, the distribution of subsidence funnels spreads, and an increasing number of regions are facing subsidence risks. This phenomenon deserves much attention in the overall development of the city, and further deterioration in areas where land subsidence occurs must be stopped. Preventing the spread of subsidence areas and strictly preventing the formation of new subsidence funnels should be the focus. (5) There are some suggestions for policy makers: 1) In the 20th century, the main cause of land subsidence in Shanghai was groundwater exploitation. However, the growing population has worsened the land subsidence severely since the beginning of the 21st century. The government should shift its focus to controlling population growth moderately, now that the exploitation of groundwater has been effectively controlled. 2) The most serious land subsidence occurred in Yangpu District, Hongkou District, Jing'an District, and Pudong New Area in 2016, which shows that the government should limit the intensity of development in these areas, especially the intensity of high-rise building construction. 3) The study found that sea-level rise is a very important factor in worsening land subsidence. Reducing greenhouse gas emissions is therefore an unshirkable responsibility. (6) There are some limitations of the research: 1) Due to a lack of data for the last two years, the data used in this study are pre-2016. 2) Future research needs to further explain the specific influencing mechanism of the various impacting factors on land subsidence. 3) How to predict land subsidence based on its development still needs to be explored further.
Figure 1. The administrative regions of Shanghai.
Figure 2. Change in groundwater exploitation in Shanghai.
Figure 4. The number of civil vehicles in Shanghai.
Figure 5. Social factors of land subsidence in Shanghai.
Figure 6. The calculation process of the grey correlation degree.
Figure 7. Spatial distribution of average annual land subsidence in the urban central area of Shanghai.
Figure 8. Development of land subsidence in Shanghai.
Figure 9. Land subsidence clustering pattern in the central areas of Shanghai.
Table 1. The impacting factors of land subsidence in Shanghai.
Table 2. Incremental data of land subsidence and impacting factors.
Table 3. Average of statistics for different periods.
Table 4. Critical p-values and critical z-scores under different confidence coefficient values [54].
Table 5. The result of GCDA between the average annual land subsidence growth and the annual increase in the impacting factors.
Table 6. The result of GCDA between the cumulative land subsidence and the total amount of impacting factors.
Table 7. Analysis of clustering characteristics of land subsidence in the urban centre.
v3-fos-license
2023-08-29T06:50:29.600Z
2023-08-27T00:00:00.000
261245525
{ "extfieldsofstudy": [ "Physics" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.108.094046", "pdf_hash": "de430c30ec4aeabbe194a21df11a6f7db082afbc", "pdf_src": "ArXiv", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:120", "s2fieldsofstudy": [ "Physics" ], "sha1": "b7ad622ce12c9bf36314744fc48ed190ad1b7eb0", "year": 2023 }
pes2o/s2orc
Production of charmonium $\chi_{cJ}(2P)$ plus one $\omega$ meson by $e^+e^-$ annihilation Inspired by the recent observation of $e^+e^-\to \omega X(3872)$ by the BESIII Collaboration, in this work we study the production of the charmonium $\chi_{cJ}(2P)$ by $e^+e^-$ annihilation. We find that the $e^+e^-\to\omega\chi_{c0}(2P)$ and $e^+e^-\to \omega\chi_{c2}(2P)$ have sizable production rates, when taking the cross section data from $e^+e^-\to \omega X(3872)$ as the scaling point and treating the $X(3872)$ as the charmonium $\chi_{c1}(2P)$. Considering that the dominant decay modes of $\chi_{c0}$ and $\chi_{c2}(2P)$ involve $D\bar{D}$ final states, we propose that $e^+e^-\to \omega D\bar{D}$ is an ideal process to identify $\chi_{c0}(2P)$ and $\chi_{c2}(2P)$, which is similar to the situation that happens in the $D\bar{D}$ invariant mass spectrum of the $\gamma\gamma\to D\bar{D}$ and $B^+\to D^+{D}^- K^+$ processes. With continuous accumulation of experimental data, these proposed production processes offer a promising avenue for exploration by the BESIII and Belle II collaborations. Very recently, the BESIII Collaboration announced the observation of the charmoniumlike state X(3872) produced through the e + e − → ωX(3872) process [1].The line shape of the cross section suggests that the final state ωX(3872) may involve some nontrivial resonance structures.Indeed, there is already evidence for new resonances in the 4.7-4.8GeV energy range in several channels.In the e + e − → K 0 S K 0 S J/ψ [2], a state around 4.71 GeV is seen with a statistical significance of 4.2σ.In the measurement of the cross section from e + e − → D * + s D * − s [3], the inclusion of a resonance around 4.79 GeV is necessary to describe the data.A vector charmoniumlike state with a mass of M = 4708 +17 −15 ± 21 MeV and a width of Γ = 126 +27 −23 ± 30 MeV is reported in the process e + e − → K + K − J/ψ [4] with a significance of over 5σ.The accumulation of data around 4.72 GeV in the e + e − → ωX(3872) process is consistent with this observed resonance.Preliminary analysis indicates that the reported event cluster around 4.75 GeV in the ωX(3872) invariant mass spectrum aligns well with the predicted ψ(6D) state [5].However, we should emphasize that the present experimental data cannot conclusively confirm it, and more precise data is required to draw a definite conclusion in near future. The recent observation of the X(3872) in the process e + e − → ωX(3872) ignites our curiosity and prompts us to explore the production of the remaining χ c0 (2P) and χ c2 (2P) states, accompanied by the ω meson, through e + e − annihilation.This endeavor holds the promise of shedding further light on the properties and characteristics of these fascinating charmoniumlike states. 
In this study, we focus on the e+e− → ωX(3872) process, wherein the electron and positron annihilate into an intermediate vector charmoniumlike state, which subsequently transits into the final state ωX(3872) through the hadronic loop mechanism. Assuming the intermediate state to be a charmoniumlike state, we investigate its decay into the ωχcJ(2P) channels. We calculate the ratios of the partial widths Γ_ωχcJ(2P) among the different 2P states; these ratios are then applied to evaluate the cross sections of the e+e− → ωχc0(2P) and e+e− → ωχc2(2P) processes, taking the experimental cross section data of e+e− → ωX(3872) as the scaling point and treating the X(3872) as the χc1(2P). We obtain sizable cross sections for the e+e− → ωχc0(2P) and e+e− → ωχc2(2P) processes, comparable to the cross section of e+e− → ωχc1(2P), which can provide valuable guidance for future experimental searches in these two channels.

Contrary to the χc1(2P) state, the χc0(2P) and χc2(2P) states dominantly decay into D D̄ final states [17,18]. Hence, we further investigate the processes e+e− → ωχc0(2P) → ωD D̄ and e+e− → ωχc2(2P) → ωD D̄. As these two processes share the same final state, the D D̄ invariant mass spectrum can be utilized to identify the χc0(2P) and χc2(2P) states. However, this task is challenging due to the mass similarity between the χc0(2P) and χc2(2P). Similar to the situation in observing the χc0(2P) and χc2(2P) in the B+ → D+D−K+ decay [21,22], precise data are imperative for a successful identification. This work addresses the relevant discussions and considerations in this regard. Further experimental advances and improved data will be essential to discerning these intriguing processes accurately.

This paper is organized as follows. After the introduction, we illustrate how to calculate the cross sections of e+e− → ωχcJ(2P) in Sec. II, where a main task is to introduce the hadronic loop mechanism to study the decay of a vector charmoniumlike Y state into ωχcJ(2P). In Sec. III, we discuss the contribution of the χc0(2P) and χc2(2P) states to the D D̄ invariant mass spectrum of the e+e− → ωχc0,2 → ωD D̄ channels. Finally, this paper ends with a short summary.

The BESIII data of e+e− → ωX(3872) show that the ωχcJ final state may be attributed to some nontrivial resonance structures, since there exists an event cluster around the energy range 4.7-4.8 GeV [1]. In fact, similar phenomena have been observed in other processes such as e+e− → K0S K0S J/ψ [2] and e+e− → D*+s D*−s [3]. This data accumulation may be due to the predicted charmonium states, specifically the ψ(7S) or ψ(6D) [5]. In this context, our primary aim is not to delve into the intricacies of these resonance phenomena. Instead, we briefly highlight that the process e+e− → ωX(3872) may occur through an intermediate vector charmoniumlike state denoted as Y. This serves as a foundation for our subsequent discussion. Upon recognizing that the process e+e− → ωχcJ(2P) proceeds through an intermediate charmoniumlike Y state, as depicted in Fig. 1, we can formulate its cross section as

σ(e+e− → Y → ωχcJ(2P)) = 12π Γ_Y^{e+e−} Γ_Y BR(Y → ωχcJ(2P)) / [(s − m_Y²)² + m_Y² Γ_Y²],   (1)

which is abbreviated as σ[ωχcJ] in the following discussion. Here, Γ_Y^{e+e−} is the dilepton width of Y and BR(Y → ωχcJ(2P)) is the branching ratio of the decay Y → ωχcJ(2P), while Γ_Y and m_Y are the resonance parameters of the charmoniumlike Y state.
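As a rough numerical sketch of Eq. (1), the snippet below evaluates the resonant cross section; the functional form follows the equation above, while every numerical input (resonance parameters, dilepton width, branching ratio) is a placeholder chosen only for illustration.

```python
import numpy as np

HBARC2_NB_GEV2 = 0.3894e6  # conversion factor: (hbar*c)^2 in nb*GeV^2

def sigma_bw(sqrt_s, m_y, gamma_y, gamma_ee, br):
    """Breit-Wigner cross section for e+e- -> Y -> omega chi_cJ(2P), cf. Eq. (1)."""
    s = sqrt_s ** 2
    num = 12.0 * np.pi * gamma_ee * gamma_y * br
    den = (s - m_y ** 2) ** 2 + (m_y * gamma_y) ** 2
    return HBARC2_NB_GEV2 * num / den   # result in nb

# hypothetical inputs (GeV): m_Y = 4.745, Gamma_Y = 0.030,
# dilepton width of 100 eV, BR(Y -> omega chi_c1(2P)) = 1e-3
print(sigma_bw(4.745, 4.745, 0.030, 100e-9, 1e-3))
```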
In the subsequent discussion, our focus shifts to the calculation of the decay width of Y → ωχcJ(2P). To elucidate the peculiar hadronic transition behavior observed in the Υ states, the hadronic loop mechanism has been employed. This effective approach serves as a valuable tool for modeling the coupled-channel effect [23][24][25], where a loop composed of bottom mesons acts as a bridge connecting the initial and final states, as illustrated in previous works [26][27][28][29][30][31][32][33][34][35][36]. Drawing inspiration from this framework, we can construct schematic diagrams representing the decay process Y → ωχcJ(2P), as illustrated in Fig. 2. These decay diagrams involve charmed meson loops. The general form of the decay amplitudes is a loop integral over the product of three interaction vertices, denoted as V1, V2, and V3 in each diagram, and the propagators of the intermediate charmed mesons, where D(p, m) = (p² − m²)⁻¹ represents a propagator. The vertices carry Lorentz indices, which are appropriately contracted in the calculation.

In order to explicitly formulate the decay amplitudes associated with the diagrams depicted in Fig. 2, it is essential to define the various interaction vertices Vi, which encapsulate the dynamics of the interactions involved. Taking into account heavy quark symmetry, we can establish the Lagrangians that characterize the interaction between charmonium states and charmed mesons [37][38][39]. They are built from the charmed meson doublet field H = (1 + v̸)/2 (D*_µ γ^µ + iDγ5), with H̄ = γ^0 H† γ^0, and from the S-wave and P-wave multiplets of quarkonia, S(QQ̄) and P(QQ̄)^µ. The Lagrangian describing the interaction between charmed mesons and light vector mesons is given in Refs. [37,38,40] and involves the light-flavor vector meson matrix V, which contains the ρ, ω, K*, and φ fields. From these Lagrangians, we can obtain the Feynman rules governing the interaction vertices that are needed for the decay amplitude; these rules are collected in Table I.

The decay amplitudes of the process Y → ωχcJ(2P) can be readily derived by employing the interaction vertices specified above; for illustration, the amplitudes for the specific case of Y → ωχc1(2P) can be written down explicitly in terms of these vertices and the charmed meson propagators. In these amplitude expressions, a form factor F²(q²) is introduced to account for the off-shell effect of the exchanged charmed mesons and to circumvent the divergence of the loop integral. This form factor takes a dipole form in q², with m_E and q the mass and four-momentum of the exchanged meson, respectively. The cut-off parameter has the form Λ = m_E + αΛ_QCD, with Λ_QCD = 220 MeV and α a dimensionless parameter of the model, which is expected to be of order 1 [41].

The total amplitude is the sum of the individual loop amplitudes multiplied by a factor of 4, which comes from the sum over the isospin doublet and the charge conjugation of the charmed meson loops. The differential partial decay width of this process is proportional to the squared total amplitude and to the center-of-mass momentum p1 of the final-state ω meson. When the initial state Y is unpolarized, the angular distribution can be integrated out, and the resulting partial width carries a factor of 1/3 from averaging over the polarizations of the initial vector state. Using a similar approach, we can investigate the partial widths of the decays Y → ωχc0(2P) and Y → ωχc2(2P).
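A small numerical sketch of this regularization is given below. The cut-off prescription Λ = m_E + αΛ_QCD is taken from the text, while the explicit monopole-squared ("dipole") form assumed for F²(q²) and the chosen q² value are illustrative assumptions, not quantities fixed by the text.

```python
import numpy as np

LAMBDA_QCD = 0.220  # GeV
M_D = 1.867         # GeV, approximate charmed meson mass

def cutoff(m_exchanged, alpha):
    """Cut-off parameter Lambda = m_E + alpha * Lambda_QCD (prescription from the text)."""
    return m_exchanged + alpha * LAMBDA_QCD

def form_factor_sq(q2, m_exchanged, alpha):
    """Assumed dipole form: F^2(q^2) = [(Lambda^2 - m_E^2) / (Lambda^2 - q^2)]^2."""
    lam2 = cutoff(m_exchanged, alpha) ** 2
    return ((lam2 - m_exchanged ** 2) / (lam2 - q2)) ** 2

# scan the alpha range 3-5 considered in the text at an illustrative q^2 = 0.5 GeV^2
for alpha in (3.0, 4.0, 5.0):
    print(alpha, round(cutoff(M_D, alpha), 3), round(form_factor_sq(0.5, M_D, alpha), 3))
```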
To calculate the partial widths of the decays Y → ωχcJ(2P), we need to determine the various coupling constants appearing in Table I. The couplings of the quarkonium multiplets to charmed mesons in Eq. (3) share the common coupling constants gS and gP; thus, the coupling constants gψD(*)D(*) and gχcJD(*)D(*) are related to gS and gP following Ref. [40].

TABLE I. Feynman rules for the interaction vertices.

With these preparations, the ratios of the different ΓωχcJ can now be calculated. The calculated ratios of the different decay channels are shown in Fig. 3. For our calculations, we consider values of the cut-off parameter α ranging from 3 to 5. It is worth noting that this selection of α falls within a safe range that avoids introducing branch points in the loop integral (the dipole form factor F(q²) resembles a propagator, with the cut-off parameter Λ effectively acting as a mass term; this choice prevents additional branch cuts in the loop integral, which could occur when specific mass conditions such as Λ + mD(*) = mχcJ are met). Within this range of α, the ratios of the decay widths ΓωχcJ(2P) exhibit only gradual variations in response to changes in the cut-off parameter α. Considering α values from 3 to 5, we obtain the ratios of the different decay widths given in Eq. (16). In this context, the partial widths of the three decay channels Y → ωχcJ(2P) (J = 0, 1, 2) are of similar magnitude, with Γωχc0 slightly greater than Γωχc1 and Γωχc2. According to Eq. (1), the relative ratios of the production cross sections for the different χcJ(2P) states can be expressed as in Eq. (17). We employ the ratios in Eq. (16) combined with Eq. (17) to estimate the cross sections for e+e− → ωχc0(2P) and e+e− → ωχc2(2P), utilizing the BESIII data on the cross section of e+e− → ωX(3872) as the reference scaling point. To begin, we assume the existence of an intermediate charmoniumlike state Y with mass mY = 4745 MeV and width ΓY = 30 MeV. We then fit the experimental data of e+e− → ωX(3872) using Eq. (1) for e+e− → ωχc1(2P). The fitting result is shown in Fig. 4. Subsequently, leveraging the obtained ratios of partial widths in Eq. (16), we estimate the cross sections for e+e− → ωχc0(2P) and e+e− → ωχc2(2P). These calculations reveal that the cross sections of the e+e− → ωχcJ(2P) processes are of comparable magnitude, with e+e− → ωχc0(2P) slightly larger than the others. Through this study, we underscore the promising prospects for the exploration of the e+e− → ωχc0(2P) and e+e− → ωχc2(2P) processes at BESIII and Belle II in the coming years.

We should emphasize the importance of analyzing the D D̄ invariant mass spectrum for establishing the χc0(2P) and χc2(2P) states. The charmoniumlike state Z(3930), a good candidate for the charmonium χc2(2P) [17], was reported by the Belle Collaboration in γγ → D D̄ [20]. Later, Belle observed a charmoniumlike state X(3915) in γγ → J/ψω [16], and the Lanzhou group proposed the χc0(2P) assignment for the X(3915) [17]. In establishing the χc0(2P) and χc2(2P) states, several serious questions have to be answered [46,47]: 1) Why is the signal of the X(3915) missing in the experimental data on the D D̄ invariant mass spectrum from γγ → D D̄?
2) Why is the mass gap between the X(3915) and Z(3930) so small? Faced with these questions, the Lanzhou group pointed out that the measured D D̄ invariant mass spectrum of γγ → D D̄ may contain both the χc0(2P) and χc2(2P) signals [48]. Moreover, the small mass gap between the X(3915) and Z(3930) can be well explained by considering the coupled-channel effect and the node effect [18]; in particular, the narrow width of the χc0(2P) can also be explained. In 2020, the LHCb Collaboration found the χc0(2P) and χc2(2P) in the B+ → D+D−K+ decay [21,22], where both the χc0(2P) and χc2(2P) were observed in the D+D− invariant mass spectrum, confirming the predicted properties of the χc0(2P) and χc2(2P) [17,[48][49][50]. From this brief review, we can see that the D D̄ invariant mass spectrum plays an important role in deciphering the nature of the Z(3930) and X(3915) and in establishing the χc0,2(2P) states.

Utilizing the calculated ratios of ΓωχcJ(2P) and considering the D D̄ decay channels of the χc0(2P) and χc2(2P), we can make a rough prediction of the cross section of e+e− → ωD D̄. As in the previous section, we assume that there is a charmoniumlike state Y with the same resonance parameters; the results are properly normalized. The corresponding D D̄ invariant mass spectrum of e+e− → ωD D̄ is shown in Fig. 5(a). Here, the ratios of the partial widths of the different ωχcJ(2P) channels are taken to be Γωχc0(2P) : Γωχc1(2P) ≈ 4.0 and Γωχc2(2P) : Γωχc1(2P) ≈ 1.0; these values serve as typical values of our predicted range in Eq. (16). The branching fractions of the D D̄ channel of the χc0(2P) and χc2(2P) are set to 100% and 60%, respectively. The small mass gap, mχc0(2P) − mχc2(2P) < 5 MeV according to the LHCb measurement [21,22], makes it difficult to distinguish the χc0(2P) and χc2(2P) states directly in the D D̄ channel; we suggest that future experiments like BESIII and Belle II focus on this issue as more precise data are accumulated.

We also consider the angular distributions of the processes Y → ωχc0(2P) and Y → ωχc2(2P), which may provide a way to discriminate between the χc0(2P) and χc2(2P) states in this process. The angular distribution of Y → ωχcJ(2P) would be uniform if the produced charmoniumlike state Y were unpolarized, as dictated by rotational invariance. However, a JPC = 1−− state produced directly from e+e− annihilation is polarized, and its polarization is characterized by the spin density matrix ρY = (1/2) Σ_{m=±1} |1, m⟩⟨1, m|. As a result of this polarization of the vector state Y, the decay process Y → ωχcJ(2P) has an angular distribution of the form dN/(N dcosθ) ∝ 1 + αY cos²θ, where θ denotes the polar angle of the ω in the center-of-mass frame of the e+e− system and αY is a coefficient determined by the decay amplitude. Using the spin density matrix ρY, the angular distribution parameter αY can be calculated directly from the differential partial width in Eq. (13). In Fig. 5(b), we show the angular distributions of Y → ωχc0,2(2P) at √s = mY. These angular distributions are obtained with the cut-off parameter α fixed at a typical value of 4. The angular distribution of the Y → ωχc0(2P) process is almost uniform, while that of the Y → ωχc2(2P) process is not. However, the difference between the two angular distributions is not pronounced, so large data samples would be required to distinguish them experimentally.
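To make the discussion of Fig. 5(a) concrete, the sketch below builds a toy D D̄ invariant-mass spectrum by adding two Breit-Wigner line shapes weighted by the quoted partial-width ratios (4.0 : 1.0 relative to ωχc1(2P)) and D D̄ branching fractions (100% and 60%). The masses, widths, and the simple non-relativistic line shape are illustrative assumptions only; they are not the values or the exact parameterization used in the original analysis.

import numpy as np

def breit_wigner(m, m0, gamma):
    """Simple (non-relativistic) Breit-Wigner line shape, arbitrary normalization."""
    return 1.0 / ((m - m0) ** 2 + gamma ** 2 / 4.0)

# Illustrative resonance parameters in GeV (placeholders, not fitted values).
M_CHI_C0, W_CHI_C0 = 3.922, 0.020
M_CHI_C2, W_CHI_C2 = 3.927, 0.024

# Relative production weights: partial-width ratio times DDbar branching fraction.
WEIGHT_C0 = 4.0 * 1.00   # Gamma(omega chi_c0)/Gamma(omega chi_c1) ~ 4.0, BR(DDbar) ~ 100%
WEIGHT_C2 = 1.0 * 0.60   # Gamma(omega chi_c2)/Gamma(omega chi_c1) ~ 1.0, BR(DDbar) ~ 60%

m_dd = np.linspace(3.80, 4.05, 500)           # DDbar invariant mass grid (GeV)
spectrum = (WEIGHT_C0 * breit_wigner(m_dd, M_CHI_C0, W_CHI_C0)
            + WEIGHT_C2 * breit_wigner(m_dd, M_CHI_C2, W_CHI_C2))
spectrum /= spectrum.max()                    # normalize to unit peak height

# Print a coarse profile to see how strongly the two nearby peaks overlap.
for m, y in zip(m_dd[::50], spectrum[::50]):
    print(f"m(DDbar) = {m:.3f} GeV  ->  relative yield {y:.3f}")

Because the two assumed peaks sit only a few MeV apart, the summed line shape is close to a single bump, which illustrates why precise data are needed to separate the χc0(2P) and χc2(2P) contributions.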
Our results reveal that the widths ΓωχcJ(2P) are of comparable magnitude. Treating the X(3872) as the χc1(2P), we further estimate the cross sections for e+e− → ωχc0(2P) and e+e− → ωχc2(2P) by employing the experimental data on e+e− → ωX(3872) as the reference scaling point. This strongly suggests that e+e− → ωχc0(2P) and e+e− → ωχc2(2P) could indeed be within the reach of experiments like BESIII and Belle II, especially as the experimental data sets grow.

FIG. 1. The schematic diagram of the production of the χcJ(2P) via e+e− annihilation.
FIG. 2. The allowed diagrams of the Y → ωχcJ(2P) decays in the hadronic loop mechanism.
FIG. 3. The dependence of the ratios of the partial widths of the three decay channels Y → ωχcJ(2P) on α.
v3-fos-license
2021-05-01T06:17:15.305Z
2021-04-21T00:00:00.000
233463477
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1099-4300/23/5/495/pdf", "pdf_hash": "12cf5149aa97c2ae0528f3166775c4ca73c4663b", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:121", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "711b47cfc6bb73edece3f8f43658886f67a68303", "year": 2021 }
pes2o/s2orc
Research on Multi-Dimensional Optimal Location Selection of Maintenance Station Based on Big Data of Vehicle Trajectory In order to rationally lay out the location of automobile maintenance service stations, a method of location selection of maintenance service stations based on vehicle trajectory big data is proposed. Taking the vehicle trajectory data as the demand points, the demand points are divided according to the region by using the idea of zoning, and the location of the second-level maintenance station is selected for each region. The second-level maintenance stations selected in the whole country are set as the demand points of the first-level maintenance stations. Considering the objectives of the two dimensions of cost and service level, the location model of the first-level maintenance stations under two-dimensional programming is established, and the improved particle swarm optimization algorithm and immune algorithm, respectively, are used to solve the problem. In this way, the first-level maintenance stations in each region are obtained. The example verification shows that the location selection results for the maintenance stations using the vehicle trajectory big data are reasonable and closer to the actual needs. Introduction Location selection has an important impact on such aspects as public facilities, maintenance service stations, logistics distribution centers, gas stations, charging stations, and so on. The proper location can not only reduce the cost, but also increase customer satisfaction. The heavy-duty vehicle repair service station is an indispensable piece of infrastructure for vehicle travel. It is responsible for important maintenance and service functions. During the driving of the vehicle, it needs to be maintained, repaired, and replaced from time to time. As the core part of automobile after-sales service, the service station plays a vital role in automobile after-sales service. Therefore, in the fierce modern market environment, the appropriate location of the maintenance station is particularly important. How to determine the location of the maintenance station has become one of the common needs of enterprises and society. Scholars at home and abroad have done a lot of research on the theory and method of site selection, and achieved corresponding results. Some scholars build quantitative mathematical models through objective function and constraint conditions to study the location problem, and solve it by heuristic algorithm. Dan et al. [1] proposed the application of a stochastic programming model to study the location problem of distribution centers under uncertain demand. Lan et al. [2] studied the location of the distribution center based on uncertain customer demand and the fixed cost of the distribution center built. Some scholars have also proposed multi-level planning site selection, and separately established models for different levels of problems for comprehensive solutions. Wu et al. [3] proposed a two-level programming model combining location optimization and distribution allocation to optimize the location problem, and designed a heuristic solution algorithm combining a genetic algorithm and Frank-Wolfe algorithm. Wei et al. [4] established a multi-objective location model for bus-filling stations with the first objective of minimizing the construction cost of gas stations and the second objective of minimizing the gas-filling costs of all buses. Li et al. 
[5] used a factor-scoring method to screen the locations of logistics supermarkets based on key factors, and then established a two-level planning location model for the initial location of logistics supermarkets. With the development of big data technology, many scholars have applied big data technology to the site selection problem. Li et al. [6] used big data, such as user payment method and distance, to reasonably select the location of an electricity fee payment point for the rural electric industry business area, which has obvious practical significance. Yang et al. [7] mined and analyzed electric vehicle travel modes and massive movement data to determine the locations of electric vehicle charging piles; this method accurately locates the demand for electric vehicles and increases the accuracy of site selection. Zhang et al. [8] conducted a systematic analysis of the locations of logistics parks using big data, and gave a complete set of park location plans. Wu et al. [9] used big data to select the location of a logistics distribution center, and found the actual demand locations by screening and analyzing customers' online orders. Wang et al. [10] established an economic model and a location model by analyzing various factors that affect the location of a substation; this was based on a distributed design incorporating various data, such as remote sensing data and environmental factors, and used an analytic hierarchy process combined with big data analysis to design and select the sites of substations. In terms of the location of maintenance stations, Ye et al. [11] conducted a kernel density analysis of travel hotspots, taking maximum coverage as the objective function and considering coordination with the city, to study the location of taxi service stations. Xie et al. [12] used the center-of-gravity method to select alternative repair stations, and then used the analytic hierarchy process and a fuzzy evaluation method to select the location of the after-sales service station; this is a common idea and method of site selection. To sum up, the models and methods of site selection are constantly improving and developing, and the application of emerging technologies also makes site selection more efficient and accurate. However, there is little research on the location of maintenance stations, and especially on the application of big data technology to the location of maintenance stations. Aiming at the location problem of heavy-duty vehicle maintenance stations, this paper uses big data technology to collect and organize the data and to establish a two-dimensional planning location model. First, the trajectory data in the Internet of Vehicles system is collected and processed, and the k-means clustering algorithm is used to analyze the distribution of vehicle driving and divide it into regions. Then, the location selection of the first-level maintenance stations is considered under two-level planning based on the location results of the second-level maintenance stations; the particle swarm algorithm and the improved particle swarm algorithm are used to solve the constrained cost minimization model, while the immune algorithm is used to solve the maximum service level model. Finally, an example is given.

Problem Description

In the fierce market competition environment of heavy-duty vehicles, improving the after-sales service level is the key to the success of enterprises, and the establishment of a perfect after-sales service system is the top priority.
This paper studies the locations of the maintenance stations of Heavy Truck Group Co., Ltd (Baotou, China). The company pays attention to the construction of after-sales service stations, but the original service stations can only provide basic repairs and storage for some spare parts, which makes it difficult to meet the needs of major repairs and vehicle maintenance. In addition, due to the different requirements for maintenance services in various regions, the original maintenance stations have problems such as insufficient demand, resulting in excessive resource usage, or excessive demand, resulting in insufficient resources and poor service quality. In order to provide users with more comprehensive after-sales service, the problem of maintenance station location optimization needs to be solved. Therefore, this paper studies the optimization of maintenance station locations. According to the actual needs of heavy-duty vehicles and the requirements on construction cost, the maintenance stations are divided into two levels. Table 1 shows the attributes of the maintenance stations at each level. This section constructs the location model of the maintenance stations, and the specific problem can be described as follows: on the basis of vehicle trajectory big data, a k-means algorithm and a set coverage model are used to select the second-level maintenance stations across the country [13]. The second-level maintenance stations are widely distributed and are mainly responsible for the daily maintenance activities in a certain area; response speed is the main criterion for evaluating their performance. The second-level maintenance stations are spread all over the country and can meet the basic maintenance needs of vehicles. The first-level maintenance station is the maintenance center and distribution center of a certain area, which functions as a hub and can realize the timely deployment of maintenance service resources. Taking the second-level maintenance stations as demand points and combining the transportation, construction, and operation costs with the service level, the location of the first-level maintenance stations under multi-dimensional planning is carried out to ensure that users get the best maintenance service and genuine spare parts supply in the shortest time.

Establishment of Double-Dimensional Planning Location Model

Most traditional multi-objective solutions take cost minimization or revenue maximization as objective functions, which usually contain multiple impact factors rather than a single cost factor. In the solution process, some quantified factors are usually normalized and effectively ignored, which biases the results to a certain extent. Therefore, based on this observation, this paper proposes double-dimensional planning: cost minimization and service level maximization. An improved particle swarm optimization algorithm and an immune algorithm are used to solve the model. The model assumptions are as follows: 1. The first-level maintenance stations are converted from second-level maintenance stations, so the new first-level maintenance stations are selected from the second-level maintenance stations that have already been selected; 2. The maintenance capacity and inventory capacity of the first-level maintenance stations are not limited; 3.
The cost of establishing and operating a maintenance station is fixed and known (including its land cost, storage cost, transportation cost, etc.).

Establishment of Cost-Minimization Model

The first-level maintenance station location problem is to select appropriate places among the second-level maintenance stations at which to construct first-level maintenance stations, so as to minimize the sum of the fixed cost of the facilities and the associated transportation cost. The model seeks to minimize all kinds of costs, which are mainly divided into two parts:
• the fixed cost required for the construction of the maintenance station, including the expansion's land cost, construction cost, and management and operation cost;
• the distance cost from the demand points to the maintenance station together with the weight cost.
The cost minimization model is shown in Equation (1), where C is the total cost, C_E is the land cost, C_B is the construction cost, C_M is the management and operation cost, S_di is the distance from second-level maintenance station d to first-level maintenance station i, and β is the transportation rate from the second-level to the first-level maintenance station, i.e., the freight rate per unit of transported weight. I denotes the first-level maintenance stations, written as i_1, i_2, . . . , i_n; D denotes the second-level maintenance stations (demand points), written as d_1, d_2, . . . , d_m.

Establishment of Service Level Maximization Model

The service level can be reflected by distance: at a given speed, the shorter the distance, the shorter the time. Therefore, it is necessary to minimize the total distance between the first-level maintenance station and the second-level maintenance stations in its region in order to maximize the service level; the greater the distance, the lower the service level, and vice versa. The service level maximization model is therefore transformed into Equation (2), where P_S is the service level, S is the total distance, S_di is the distance between second-level maintenance station d (demand point) and first-level maintenance station i (supply point), and y_di encodes the assignment relationship between second-level and first-level maintenance stations. The constraints are given in Equations (3)-(5), where k is the number of first-level maintenance stations to be selected in the area and r_i is the selection variable of candidate point i. Constraint (3) ensures that each second-level maintenance station (demand point) is served by exactly one first-level maintenance station. Constraint (4) indicates that the total number of selected first-level maintenance stations in the area is k, where r_i indicates whether point i is selected as a first-level maintenance station (1 if selected, 0 otherwise). Constraint (5) means that both variables are 0-1 variables.
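Equations (1)-(5) themselves are not reproduced in the text above. A hedged LaTeX sketch consistent with the verbal description is given below; the demand weight q_d and the linking constraint y_di <= r_i are assumptions introduced here for completeness and may differ from the exact formulation of the original model:

% Hedged reconstruction of the two single-dimension objectives and the
% constraints described in the text (Eqs. (1)-(5)).
\begin{align*}
\min\; C &= \sum_{i\in I} r_i\left(C_E^i + C_B^i + C_M^i\right)
           + \beta \sum_{d\in D}\sum_{i\in I} q_d\, S_{di}\, y_{di}
  && \text{(cost dimension)}\\
\max\; P_S \;\Longleftrightarrow\; \min\; S &= \sum_{d\in D}\sum_{i\in I} S_{di}\, y_{di}
  && \text{(service-level dimension)}\\
\text{s.t.}\quad
 &\sum_{i\in I} y_{di} = 1 \quad \forall d\in D
  && \text{each demand point served by one station}\\
 &\sum_{i\in I} r_i = k, \qquad y_{di}\le r_i
  && \text{exactly } k \text{ first-level stations opened}\\
 &y_{di},\, r_i \in \{0,1\}.
\end{align*}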
Algorithm Introduction and Design

The quality of the location result depends on the suitability of the location model and algorithm, and the location problem itself is NP-hard, so choosing an appropriate solution algorithm is essential. The methods for solving logistics location problems mainly include exact algorithms and heuristic algorithms. The location of maintenance stations is a large-scale location selection problem, and exact algorithms can only solve small-scale location optimization problems; in practical problems, when the location scale is large, it is necessary to design a heuristic algorithm to solve the model. Many scholars have carried out relevant research. For example, the simulated annealing algorithm [14,15], genetic algorithm [16,17], ant colony algorithm [18], particle swarm algorithm [19], immune algorithm [20], and other algorithms have been used to solve location models. Because there are many demand points and candidate points in this paper, the particle swarm optimization algorithm can speed up the solution by adjusting the number of particles, and its optimization ability is stronger. Moreover, many scholars have improved the particle swarm algorithm in various respects [21][22][23][24][25], gradually introducing the inertia weight, acceleration factors, hybridization, adaptive factors, and so on; examples have shown that the improved particle swarm algorithm has high efficiency and accuracy. The immune algorithm, in turn, performs well on multi-valued problems. In this paper, the first-level maintenance stations are not a single facility but a group of locations that need to be found, which constitutes a large-scale chain location problem. Therefore, the improved particle swarm optimization algorithm and the immune algorithm are used to solve the double-dimension programming model.

Particle Swarm Optimization Algorithm

Particle swarm optimization (PSO) was proposed by Kennedy J. and Eberhart R. C. in 1995 [26]. It searches for the optimal solution by simulating the foraging behavior of birds. This paper uses the algorithm to optimize the total construction cost of the maintenance stations. In the standard update equations, of which Equation (6) is the velocity update, c_1 and c_2 are acceleration constants; r_1 and r_2 are random numbers in the range 0-1; v is the particle velocity, with the velocity at iteration t (the current velocity) updated to the velocity at iteration t+1; x is the particle position, with the position at iteration t updated to the position at iteration t+1; p_best is the best position found so far by the individual particle, and g_best is the best position found so far by the whole population. The term (p_best − x_id) reflects the particle's knowledge of itself, also known as the "cognition" part, which guides the particle toward its own historical best position. The term (g_best − x_id) is called "social knowledge"; by sharing information, it guides all particles in the swarm toward the global optimum. In this paper, on the basis of Equation (6), a weight factor is added to accelerate convergence toward the global optimum. In the improved equations, w is the inertia weight, which indicates the degree to which a particle keeps its original speed. The inertia weight balances the local and global search abilities of the particles and mainly represents the influence of the previous generation's velocity on the current one: the larger the inertia weight w, the greater the influence of the previous velocity, and the more the particle continues to move at its previous speed. T is the maximum number of iterations, and t is the current iteration number.
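As a concrete illustration of the improved update rule described above, the sketch below implements the velocity and position updates with a linearly decreasing inertia weight between the stated bounds of 0.4 and 0.95. The linear schedule and the variable names are assumptions for illustration; the paper's own Equations (6)-(9) may use a different functional form.

import numpy as np

def improved_pso_step(x, v, pbest, gbest, t, T,
                      c1=2.0, c2=2.0, w_min=0.4, w_max=0.95, rng=None):
    """One velocity/position update of the improved PSO.

    x, v   : (n_particles, n_dims) current positions and velocities
    pbest  : per-particle best positions, gbest : global best position
    t, T   : current and maximum iteration number
    """
    rng = np.random.default_rng() if rng is None else rng
    # Inertia weight decreasing linearly from w_max to w_min over T iterations
    # (an assumed schedule consistent with the stated 0.4-0.95 range).
    w = w_max - (w_max - w_min) * t / T
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (pbest - x)      # "cognition" part
             + c2 * r2 * (gbest - x))     # "social knowledge" part
    x_new = x + v_new
    return x_new, v_new

For the location problem, the continuous positions produced by this step would additionally be discretized (rounded and clipped) to the integer codes described next.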
The particle swarm optimization method adopts integer coding for locating the first-level maintenance stations, and the code number of each position in a particle corresponds to an alternative first-level maintenance station. The N candidate first-level maintenance stations are numbered consecutively from 1 to N, and the i-th particle of the first-level maintenance station swarm is denoted A_i(t). If the value of the ϕ-th digit A_iϕ(t) of the particle is n, then this location is the n-th selected first-level maintenance station; if the value is 0, the first-level maintenance station at this position is not selected. The second-level maintenance stations (demand points) are numbered consecutively from 1 to K and encoded in a particle B; if the digit corresponding to a demand point is m, that demand point is served by the m-th selected first-level maintenance station. Figure 1 demonstrates the structural relationship between the maintenance stations for the case in which two first-level maintenance stations are selected from five alternative first-level maintenance stations to serve eight second-level maintenance stations; the corresponding discretized particles A and B are shown there.

Immune Algorithm

The immune algorithm (IA) was proposed by T. Fukuda and others [27] in 1998. Like the genetic algorithm, which imitates the genetic evolution of the biological world, the immune algorithm is inspired by the theory of the biological immune system. In this paper, the immune algorithm is used to solve the service level maximization model for the first-level maintenance stations. First, the initial population of antibodies is generated. A candidate plan is represented by a coded sequence of length p, in which each entry is the serial number of an alternative first-level maintenance station; the number of selected maintenance stations in the plan equals the code length. This case uses the real-number encoding method. If the service network consists of 35 second-level maintenance stations, the second-level maintenance stations represented by the numbers {1, 2, . . . , 35} may be selected as first-level maintenance stations; then, 2 of the 35 second-level maintenance stations are selected as first-level maintenance stations.
For example, when antibody {2, 8} or antibody {2, 14} is selected, it means that second-level maintenance stations 2 and 8, or maintenance stations 2 and 14, corresponding to the antibody numbers, will be selected as first-level maintenance stations in the region. This step ensures that each demand point can be met. The affinity between an antibody and an antigen is used to indicate the recognition of the antigen by the antibody; the function expression is given in Equation (11), in which the second term of the denominator is a penalty term, where c is a relatively large positive integer, meaning that if the assignment distance is too large and exceeds the constraints of the model, it is penalized; B_v is the penalty function. Regarding the affinity between antibodies, the matching degree between antibodies is represented by the method of R-bit continuity. First, the value R is determined, which represents the threshold for judging the affinity between antibodies. The affinity function S_b between antibodies is given in Equation (12), where β is the number of identical digits between the two antibodies and l is the length of the antibodies. For example, for two antibodies of the same length, {1, 2, 3, 4} and {3, 4, 5, 6}, two values coincide, so the similarity affinity between the antibodies is 0.5. Next, the antibody concentration is calculated. The antibody concentration is the proportion of similar antibodies among all antibodies in the population, and it is expressed in Equation (13), where C_v is the ratio of antibody v to the antibody group, that is, the antibody concentration; M represents the number of antibody species; and S_vi represents the similarity between antibody v and antibody i. Finally, the expected reproduction probability is calculated. The expected reproduction probability, also called the incentive degree, is determined by the affinity between the antibody and the antigen together with the antibody concentration; it is expressed in Equation (14), where λ is a constant. From this function, it can be seen that the expected reproduction probability increases with increasing individual fitness and decreases with increasing individual concentration.
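The affinity, similarity, concentration, and incentive computations of Equations (11)-(14) are only described verbally above. The sketch below implements one plausible version of them; the penalty handling, the threshold-based concentration, and the λ-weighted incentive are assumptions standing in for the exact formulas of the paper.

import numpy as np

def antigen_affinity(total_distance, violation, c=1e6):
    """Eq. (11)-style affinity: large distances or constraint violations
    (violation > 0) are penalized by the large constant c."""
    return 1.0 / (total_distance + c * violation)

def antibody_similarity(a, b):
    """Eq. (12)-style similarity: fraction of identical values, e.g.
    {1,2,3,4} vs {3,4,5,6} -> 2 matching values -> 0.5."""
    a, b = np.asarray(a), np.asarray(b)
    return np.intersect1d(a, b).size / a.size

def concentrations(population, r_threshold=0.5):
    """Eq. (13)-style concentration: share of antibodies whose similarity
    to antibody v reaches the threshold R."""
    m = len(population)
    return np.array([
        sum(antibody_similarity(v, w) >= r_threshold for w in population) / m
        for v in population
    ])

def reproduction_probability(affinity, concentration, lam=0.7):
    """Eq. (14)-style incentive: grows with affinity, shrinks with
    concentration (lam stands in for the constant called lambda in the text)."""
    affinity = np.asarray(affinity, dtype=float)
    concentration = np.asarray(concentration, dtype=float)
    fit_part = affinity / (affinity.sum() + 1e-12)
    div_part = (1.0 - concentration) / ((1.0 - concentration).sum() + 1e-12)
    score = lam * fit_part + (1.0 - lam) * div_part
    return score / score.sum()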
This paper uses MATLAB to solve the improved particle swarm optimization algorithm and the immune algorithm. In order to make the solution procedure easier to follow, a simple example is given below; in addition, the pseudo code of all algorithms used to solve the model in this paper is given in Appendix A. An automobile maintenance company plans to select 2 out of 10 second-level maintenance stations to be converted into first-level maintenance stations. Table 2 shows the known attributes of each second-level maintenance station. The improved particle swarm optimization algorithm and the immune algorithm are used to solve the two dimensions of cost and service level, respectively; the algorithm parameters are the same as in the example verification of the next section and are not repeated here. The results are summarized as follows: Table 3 shows the location results for the first-level maintenance stations. Scheme 1 is the result of the improved particle swarm algorithm minimizing the cost, and its service level is obtained by computing the total distance of the solved scheme; Scheme 2 is the result of the immune algorithm maximizing the service level, and its total cost is computed from the cost function for the solved scheme. From the solution results, it can be seen that the total cost and the service level of Scheme 1 are both better than those of Scheme 2, so Scheme 1 is chosen as the location scheme.

Example Verification

This paper takes the location of the maintenance service stations of a heavy-duty truck company as an example and carries out a study of the optimization of maintenance station locations nationwide. The vehicle trajectories, historical maintenance records, and current maintenance station location information in this article are all provided by the company. The big data, such as vehicle trajectories and historical maintenance records, are analyzed and summarized. The original data contain a lot of invalid information, so the data need to be processed. Table 4 shows the information contained in the Internet of Vehicles data: it includes the vehicle identification number, vehicle stop points, longitude and latitude coordinates of the track, data date, height, mileage, and so on. When dividing the regions, only the latitude and longitude coordinates need to be considered to achieve the clustering effect. Therefore, the redundant fields are removed and the coordinates are kept to the required number of decimal places to obtain the target data. This paper takes the driving trajectories, stop points, and so on as demand points; that is, the places that vehicles reach are all demand points. The historical records of 10 consecutive days are selected for data cleaning, yielding 76,160 pairs of geographic coordinates. Figure 2 shows the visualization of the vehicle trajectories drawn with ArcGIS.

Region Division of Second-Level Maintenance Stations

In this paper, the big data of vehicle trajectories is processed, and the k-means algorithm and a set coverage model are used to select the second-level maintenance stations nationwide. In total, 285 maintenance stations (second-level maintenance stations) are determined nationwide. Figure 3 shows the distribution map of the existing second-level maintenance stations. Combined with Figure 2, it can be seen that the areas that vehicles reach are relatively scattered overall but concentrated within each area. In order to determine the locations of the first-level maintenance stations more reasonably, the second-level maintenance stations in the whole country are divided into regions, and a first-level maintenance station is selected for each region. The tracks on the left and right sides of the upper part of the whole trajectory map are dense, and there are no tracks in the middle part; the northeast corner and the northwest corner are therefore recorded as two regions, while the middle part, which is relatively concentrated and densely distributed, is divided into four regions according to the four directions of southeast, southwest, northeast, and northwest, so that the trajectory is divided into six regions in total. Therefore, the initial number of clusters is set to 6. Figure 4 shows the distribution of the second-level maintenance stations obtained with the k-means algorithm.
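To illustrate the region-division step just described, the sketch below clusters the latitude-longitude pairs of the demand points into six regions with k-means. The file name and column names are hypothetical placeholders, and scikit-learn's KMeans is used here as a stand-in for whatever implementation the original study used.

import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical input: one row per trajectory/stop point with latitude and longitude.
points = pd.read_csv("vehicle_trajectory_points.csv")      # placeholder file name
coords = points[["latitude", "longitude"]].to_numpy()      # placeholder column names

# Six regions, matching the initial cluster number chosen in the text.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(coords)
points["region"] = kmeans.labels_

# Number of demand points (and later, second-level stations) per region.
print(points["region"].value_counts().sort_index())
print("Region centers (lat, lon):")
print(np.round(kmeans.cluster_centers_, 4))

Note that Euclidean distance on raw latitude-longitude values is a simplification; projecting the coordinates before clustering would be more accurate but does not change the idea of the region division.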
According to the clustering results, the number of second-level maintenance stations in the six regions is 35, 65, 76, 36, 40, and 33, respectively. The enterprise cost requirement is that 15 of the 285 second-level maintenance stations should be selected as first-level maintenance stations, and the distribution across regions should be balanced. Each first-level maintenance station needs to meet the demand of about 19 second-level maintenance stations, rounded off according to a certain linear proportion. Table 5 therefore shows the demand points contained in each region and the number of maintenance stations at the various levels.

Parameter Calculation of Second-Level Maintenance Stations

The k-means algorithm divides the second-level maintenance stations nationwide into six regions. The 35 second-level maintenance stations in region 1 are selected as an example for verification. Table 6 shows the information on these 35 second-level maintenance stations; the actual distance between the second-level maintenance stations is calculated from their geographic coordinates. Table 6 lists the attributes of the 35 second-level maintenance stations (demand points) in the first region, including the location, demand, and construction cost of the second-level maintenance stations.
Among them, the demand is the weight of spare parts required by each second-level maintenance station, and the fixed cost is the cost of transforming the second-level maintenance station into a first-level maintenance station, including the expansion's land cost, construction cost, and management and operation cost. The fixed cost is estimated according to the local land rental fee, construction fee, labor cost, and other costs.

Solution of the Location Model of the First-Level Maintenance Stations in a Region

Here, the solution of the cost minimization model is presented first. Taking the second-level maintenance stations in the first region as the demand points, the above-mentioned attributes are considered; the rate matrix, the distance matrix, and the other components of the combined cost function are established; and the improved particle swarm algorithm is used to locate the first-level maintenance stations in the region. The parameters of the particle swarm optimization are set as follows: population size s = 100, number of iterations gen = 2500, c_1 = c_2 = 2, and inertia weight w ranging from 0.4 to 0.95. Table 7 shows the results of solving the cost function with particle swarm optimization and with improved particle swarm optimization, while Figure 5 shows the fitness convergence (Figure 5. Convergence of the particle swarm and the improved particle swarm algorithm). It can be seen from Table 6 that, using the particle swarm optimization algorithm to solve the cost function, demand points 14 and 34 are selected as the first-level maintenance stations. The improved particle swarm optimization algorithm selects demand points 2 and 14 as the first-level maintenance stations, and the cost is lower when demand points 2 and 14 are selected as the first-level maintenance stations. Combining this with Figure 5, it can be concluded that the improved particle swarm optimization algorithm has a faster convergence speed, so the improved particle swarm optimization algorithm has better performance.

Next, the solution of the service level maximization model is presented. When considering the service level, this paper mainly considers the response of the delivery arrival time. Assuming that the speed is constant, the total distance can reflect the quality of the service level. The immune algorithm is used to obtain the assignment of the second-level maintenance stations' demands within their respective ranges when the total distance is smallest. The basic parameters of the immune algorithm are set as follows: population size n = 50, memory bank capacity o = 10, and number of iterations g = 100; crossover probability p_c = 0.5, mutation probability p_m = 0.4, and diversity evaluation parameter p_s = 0.95. Table 8 shows the results of the immune algorithm for the service level. Figure 6 shows the convergence curve of the optimal distance fitness.
Figure 7 shows the locational relationship between the first-level maintenance stations and the second-level maintenance stations. It can be seen from Table 8 that the result of using the immune algorithm to optimize the service level (minimum total distance) is that demand points 27 and 9 are selected as the first-level maintenance stations, with a total distance of 4332 km. In order to further compare the solutions of the improved particle swarm algorithm and the immune algorithm, the service level (total distance) of the solution obtained by the improved particle swarm algorithm and the total cost of the solution obtained by the immune algorithm are computed separately. Table 9 shows the comparison. Scheme 1 is the result of the immune algorithm maximizing the service level, with its total cost calculated from the cost function for the solved scheme; Scheme 2 is the result of the improved particle swarm algorithm minimizing the cost, with its service level calculated as the total distance of the solved scheme. (Figure 6. Convergence of the immune algorithm.) It can be concluded from Table 9 that, taking the minimum total cost as the criterion, the minimum cost of Scheme 2 is lower than that of Scheme 1, and the total cost of Scheme 1 is 1.07 times that of Scheme 2. Taking the service level as the criterion, the total distance of Scheme 1 is much smaller than that of Scheme 2, and the service level of Scheme 1 is 1.87 times that of Scheme 2. According to the company's policy of simultaneously reducing cost and improving the service level, the relative importance of total cost and service level in the location of maintenance stations is set to 1:1, and Scheme 1 means that the optimal service level is achieved while the total cost remains small. Therefore, the location strategy for the first-level maintenance stations in this area is determined as in Scheme 1. The other five regions are solved according to the above location ideas and algorithms, and the final national first-level and second-level maintenance stations are obtained.
Figure 8 shows the region division of the maintenance station locations. Figure 9 shows the distribution of the final location results.

Conclusions

Based on the big data of the vehicle trajectory, this paper proposes a method for selecting the locations of maintenance stations by partition and classification. Taking the vehicle trajectory big data as the demand points, the maintenance stations are divided into first- and second-level maintenance stations according to the actual needs of the vehicles, and the responsibilities and functions of the maintenance stations at each level are defined, so as to provide accurate services for the vehicles in each region. In addition, the idea of zoning site selection avoids the problem of unsatisfactory site selection results due to uneven demand. An improved particle swarm algorithm and an immune algorithm are used to determine the multi-dimensional locations of the first-level maintenance stations. The multi-dimensional planning location model considers various practical factors, which makes the results more accurate. A two-dimensional planning model is established considering cost minimization and service level maximization, using the improved particle swarm and immune algorithms to determine the first-level maintenance stations. The improved particle swarm algorithm speeds up the optimization and reduces the cost at the same time, which demonstrates the effectiveness of the algorithm improvement. However, the above research still has some shortcomings. The processing of the Internet of Vehicles data is not refined enough, and updates of the Internet of Vehicles data are not considered; that is, dynamic demand is not taken into account.
In addition, the paper only considers the two aspects of service level and related costs in the location model, and does not consider the model in multivariate situations such as different vehicle types and differing service requirements. Therefore, the model proposed in this paper has certain limitations. Based on the above problems, future research will study the maintenance demand location problem under dynamic data and integrate multiple variables into the solution model to enhance the applicability of the location method. At the same time, in terms of algorithm performance, the performance can be improved by more refined processing of the initial vehicle trajectory data, or by tuning and testing the relevant parameters of the algorithm.

The pseudo code of the particle swarm algorithm used to minimize the total cost of the maintenance station location is as follows (N is the swarm size):

procedure PSO
    for each particle i = 1 to N
        initialize velocity V_i and position X_i of particle i
        evaluate particle i and set Pbest_i = X_i
    end for
    gbest = the Pbest_i with the smallest fitness
    while not stop
        for i = 1 to N
            update the velocity and position of particle i
            evaluate particle i
            if fit(X_i) < fit(Pbest_i) then Pbest_i = X_i
            if fit(Pbest_i) < fit(gbest) then gbest = Pbest_i
        end for
    end while
end procedure
v3-fos-license
2014-10-01T00:00:00.000Z
2005-04-01T00:00:00.000
9246012
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/1471-2482-5-7", "pdf_hash": "a3bef44b8f3820d397ce77604f7ac3f212af12cb", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:123", "s2fieldsofstudy": [ "Medicine" ], "sha1": "4bce2ed91c50eb956917e8ad24c1270dbd63b18d", "year": 2005 }
pes2o/s2orc
Adequate symptom relief justifies hepatic resection for benign disease Background The purpose of this study was to evaluate the long-term results of partial liver resection for benign liver lesions. Methods All patients operated on for benign liver lesions from 1991 to 2002 were included. Information was retrieved from medical records, the hospital registration system and by a telephonic questionnaire. Results Twenty-eight patients with a median age of 41 years (17–71) were operated on (M/F ratio 5/23). The diagnosis was haemangioma in 8 patients, FNH in 6, HCA in 13 and angiomyolipoma in 1. Eight patients were known to have relevant co-morbidity. Median operating time was 207 minutes (45–360). The morbidity rate was 25% and no postoperative mortality was observed. Twenty-two patients (79%) had symptoms (mainly abdominal pain) prior to surgery. Twenty-five patients were reached for a questionnaire. The median follow up was 55 months (4–150). In 89% of patients preoperative symptoms had decreased or disappeared after surgery. Four patients developed late complications. Conclusion Long-term follow up after liver surgery for benign liver lesions shows considerable symptom relief and patient satisfaction. In addition to a correct indication these results justify major surgery with associated morbidity and mortality. Background Partial liver resection is an accepted treatment for primary and secondary malignancies of the liver. In experienced hands this operation is associated with mortality rates of less than 5% and morbidity of approximately 30% [1][2][3][4]. Unlike malignant liver tumours, the indication for resection of benign hepatic lesions, including haemangiomas, focal nodular hyperplasia (FNH) and hepatocellular adenomas (HCA), remains controversial [5][6][7]. Indications for resection of benign liver masses include: 1) severe or progressive symptoms, 2) uncertain diagnosis with a suspicion for malignancy, and 3) risk of haemorrhage or rupture. If possible, it is important to discern whether the presenting symptoms are due to the detected liver lesion before proceeding with surgical intervention. Several studies have reported on the indications for surgery in various benign hepatic tumours. However, less is known about the long-term results of surgical treatment, particularly regarding symptom relief. The present study was undertaken to evaluate the long-term results of partial liver resections for benign liver lesions, with emphasis on the course of symptomatology, the long-term complication rate, and patient satisfaction. Methods All patients treated by partial liver resection for benign lesions in the University Medical Centre Utrecht between January 1991 and December 2002 were included. Information about these patients was retrieved from medical records and the hospital registration system. Preoperative parameters consisted of age, sex, diagnosis, co-morbidity, presenting symptoms and indication for resection. In the preoperative work-up, physical examination, ultrasound and computed tomography (CT) were routinely performed to exclude other pathology causing the symptoms. On indication, additional gastroscopy or endoscopic retrograde cholangiopancreatography was performed. The indications for resection of a haemangioma were persisting symptoms and rapid growth. After exclusion of other pain aetiology, a period of 3 months of observation was allowed to assess the persistence of the symptoms.
In case of FNH the indications for resection were persisting symptoms and to exclude malignancy. HCA's were resected if symptomatic or larger than 5 cm. Date of resection, extent of resection, number of perioperative blood transfusions and duration of resection were considered perioperative parameters. A major resection was defined as a resection of three segments or more. Perioperative blood transfusion was defined as at least one unit of packed cells infused within 24 hours after surgery. Documented postoperative parameters consisted of clinically relevant complications, postoperative mortality, duration of admission and follow up. Postoperative mortality was defined as in-hospital mortality. Long-term follow up was obtained by a telephonic questionnaire. Information was collected about the presenting symptoms, the relief of these symptoms after surgery and the impact on physical and social activities. The limitations on physical and social activities as a result of the presenting symptoms were divided into severe, moderate and none. Moderate limitations were defined as limitations for activities at least once a week, while severely limited patients experienced daily restrictions. Symptom relief was defined as a decrease or absence of the presenting symptoms after surgery. Results A total of 28 patients were operated on for benign liver lesions. The diagnosis was a haemangioma in 8 patients (29%), FNH in 6 (21%), HCA in 13 (46%) and angiomyolipoma in 1 (4%). The group consisted of 23 female and 5 male patients with a median age of 41 years (range 17-71). Eight patients (29%) were known to have relevant comorbidity, including severe cardiac or pulmonary diseases, diabetes mellitus, hepatitis, liver fibrosis, obesity and multiple sclerosis. Twenty-two patients were known to have symptoms prior to surgery (79%). The most frequent presenting symptom was upper abdominal pain (64%). Other presenting symptoms are shown in table 1. The most important indications for resection were symptoms and excluding malignancy (table 2). The symptoms mentioned in this table consisted of abdominal pain in all patients. Three patients with HCA presented with haemorrhage as a result of spontaneous rupture. One patient was immediately operated on. The other 2 patients were stabilized and resection was performed after the haematoma was resolved. One patient underwent extended left hemihepatectomy for an asymptomatic, but rapidly growing giant haemangioma (25 cm). This major resection was accompanied by massive intraoperative haemorrhage from direct venous hepatocaval branches (7500 ml). After the resection the transected surface kept oozing. Therefore haemostasis was obtained by packing with gauzes. About 36 hours later the gauzes were removed. Further postoperative recovery was uneventful. A second patient underwent reoperation for wound dehiscence. The median hospital stay was 9.5 days (range 5-39). Seven patients developed one or more postoperative complications resulting in a morbidity rate of 25% (table 4). No postoperative mortality was observed. During follow up one patient died as a result of a cause other than liver surgery. Three months after a left hemihepatectomy for a large tumour which turned out to be an angiomyolipoma this patient died of cerebral stroke. Of the remaining 27 patients we were able to contact 25 for a questionnaire (figure 1). The median follow up of the interviewed patients was 55 months (range 4-150).
Six of the 25 interviewed patients had no symptoms prior to surgery and underwent resection because of an uncertain diagnosis of an incidentally discovered liver lesion. Before surgery 7 patients were severely limited in their physical activities as a result of the liver lesion, 5 were moderately limited and 13 were not limited. Considering social activities 4 patients were severely limited prior to surgery, 4 were moderately limited and 17 were not limited. After surgery the symptoms had decreased or disappeared in 17 of the 19 interviewed patients with preoperative symptoms (89%). In two patients the symptoms were unchanged. One patient underwent partial liver resection for an adenoma. After the resection the preoperative abdominal pain never decreased. In a second patient preoperative abdominal pain did not decrease after resection of a large haemangioma. In these two cases the preoperative symptoms were probably not related to the liver lesion. Four of the 25 interviewed patients had developed late complications as a result of the operation. These complications consisted of a hypertrophic scar (n = 2) and incisional hernia (n = 2). Twenty-three of 25 interviewed patients were satisfied with the result of the resection (92%). Discussion Symptom relief was observed in 89% of the patients (17/ 19), while in two patients the preoperative symptoms had not decreased. Only a few studies concerning long-term follow up of resections for benign liver tumours have been published. Terkivatan et al. described 80% symptom relief in surgically treated patients [5]. In this study surgery was only indicated in case of suspicion of malignancy, severe or increasing symptoms or a HCA larger than 5 cm. They compared this group retrospectively with patients treated conservatively, of whom 34% presented with symptoms, e.g. non-specific complaints of fatigue and mild abdominal pain considered unrelated to the tumour. During long-term follow up 87% symptom relief was observed in the group who was treated conservatively. Charny et al. registered 93% symptom regression in patients who underwent partial liver resection for benign liver tumours and 86% symptom regression in patients who were observed for symptoms considered unrelated to the tumour or treated for unrelated conditions [8]. Therefore partial liver resection for a symptomatic liver lesion should only be performed when symptoms are most likely related to the lesion. Because of the nature of benign liver tumours, clear indications are needed for partial liver resection, an operation associated with substantial postoperative morbidity and mortality. Indications for resection of a cavernous haemangioma of the liver are the development of complications, rapid growth, the presence of persisting symptoms or the need to establish a confident diagnosis. The potential for complications of a liver haemangioma (mainly rupture) is not an indication for resection of all liver haemangiomas. Spontaneous rupture of a haemangioma is infrequent and could be controlled with transcatheter hepatic artery embolization prior to resection [9]. As for rapid growth, we have operated on 1 patient with an asymptomatic, but rapidly growing haemangioma in our series. Little is known about the natural history of these large haemangiomas and resection can be very challenging, since they are at risk for massive intraoperative haemorrhage. 
In case of abdominal pain the main challenge is to determine whether these symptoms are due to the haemangioma or an associated condition [10][11][12]. Farges et al. described other pain aetiology in 54% of patients with symptomatic haemangiomas [7]. Gandolfi et al. observed only 7% symptomatic giant haemangiomas in their series [13]. Haemangiomas are not likely to cause diagnostic uncertainty. In our series we have not performed diagnostic resections for haemangiomas. Unlike FNH, HCA is often symptomatic and is noted for its spontaneous rupture and malignant transformation. Looking at the potential complications, HCA's with a diameter of more than 5 cm should be resected, while for smaller HCA's and FNH observation is justifiable [6,[14][15][16]. Increasing size on radiographic imaging during observation is an indication for resection. FNH and HCA occur predominantly in females and are associated with long-term contraceptive steroid use [17]. This medication should be stopped, when FNH or a small HCA is not treated surgically. In case of FNH and HCA the most frequent indication for resection is the uncertain diagnosis of a hepatic mass with suspicion for malignancy. In addition to ultrasound, CT and MRI, positron emission tomography has proved to be a helpful modality distinguishing between benign and malignant liver lesions [18,19]. On the other hand, the use of needle biopsy should be reserved for atypical cases, since the limited material is rarely sufficient to exclude malignancy. In our series diagnostic uncertainty accounted for operation in 11 of 20 resected patients and was, together with symptoms, the main indication for resection. The final diagnosis after pathological examination of resected specimens was FNH in 5 patients, HCA in 5 and angiomyolipoma in 1. All Figure 1 Patients summary HCA's were larger than 5 cm, but no malignancies were observed. Conclusion We have shown in this consecutive series that partial liver resection for benign disease is a very effective procedure to relief invalidating abdominal symptoms. Benign liver lesions should only be resected when symptoms are most likely related to the lesion. In experienced centres the resection can be performed with acceptable morbidity and low mortality.
Analysis of the problem on preparing future primary school teachers for the organization of pupils’ labor training Análise do problema de preparar futuros professores da escola primária para a organização do treinamento para o trabalho dos alunos Análisis del problema de la preparación de los futuros profesores de primaria para la organización de la formación laboral de los alumnos Analysis of the problem on preparing future primary school teachers for the organization of pupils’ labor training. The preparation of future primary school teachers for organizing labor training of young learners in the educational process of high school has been analyzed in the article. The essence and structure of the “readiness of future teachers to organize labor training of young learners” phenomenon have been revealed. The preparation of future primary school teachers for the organization of labor training of young learners in the educational process of high school has been monitored. The problems of young learners’ labor training have been discussed, and the forms and methods for preparing future primary school teachers for the organization of labor training have been developed. Preparation as a general concept is interpreted as an organized, purposeful long-term educational process in different types of educational institutions whose ultimate aim is to achieve the readiness to carry out professional pedagogical activities in a particular specialty. Readiness of future primary school teachers to organize labor training is interpreted as a result of preparing students during their studies at a higher pedagogical institution, as the state of the future teacher who has mastered the system of knowledge on productive, project technologies. The formation of future primary school teachers’ readiness is not a spontaneous or involuntary process. This is a systematic and purposeful activity. RESUMEN Análisis del problema de la preparación de los futuros profesores de primaria para la organización de la formación laboral de los alumnos. En el artículo se analiza la preparación de los futuros profesores de primaria para la organización de la formación laboral de los jóvenes en el proceso educativo de secundaria. Se ha revelado la esencia y la estructura del fenómeno de la "disposición de los futuros profesores para organizar la formación laboral de los jóvenes estudiantes". Se ha monitoreado la preparación de los futuros maestros de primaria para la organización de la formación laboral de los jóvenes estudiantes en el proceso educativo de la escuela secundaria. Se han discutido los problemas de la formación laboral de los jóvenes y se han desarrollado las formas y métodos para preparar a los futuros maestros de escuela primaria para la organización de la formación laboral. La preparación como concepto general se interpreta como un proceso educativo organizado, intencionado a largo plazo en diferentes tipos de instituciones educativas cuyo fin último es lograr la disposición para realizar actividades pedagógicas profesionales en una especialidad determinada. La disposición de los futuros docentes de primaria para organizar la formación laboral se interpreta como resultado de la preparación de los estudiantes durante sus estudios en una institución pedagógica superior, como el estado del futuro docente que ha dominado el sistema de conocimientos sobre tecnologías productivas de proyectos. La formación de la preparación de los futuros maestros de escuela primaria no es un proceso espontáneo o involuntario. 
Esta es una actividad sistemática y con un propósito. INTRODUCTION In modern conditions, it is important to fully take into account new requirements for preparing teachers, including professionals of the new generation who have competitive qualification and should not only be good at typical pedagogical situations, but should also organize young learners' educational activities based on purposeful and methodologically reasonable use of concepts of the learner-centered and competence-based education. Such changes are an important means of innovative renewal of the national system of pedagogical education because they direct future primary school teachers to the vocational activities in the new conditions of school development. Current trends in the development of the national education system require theoretical substantiation and practical updating of the content and methods of professional preparation of teachers who work with children. When obtaining education in pedagogical higher educational institutions, a future primary school teacher must carefully prepare for young learners' labor training by mastering efficient approaches, technologies, and guidelines. Thus, the relevance of studying the organization of young learners' labor training in primary school and the lack of pedagogical studies that would carry out the scientific and theoretical analysis and summarize the experience of labor training in accordance with modern social transformations and economic development, as well as the need to overcome a number of contradictions between understanding the role of labor training in the formation of young learners' personality and the lack of provisions on the integral readiness for primary school teachers' vocational activity; opportunities of the educational process in primary school and the lack of developed methods of forming young learners' productive labor activity; the need in professional preparation of future primary school teachers for the productive and labor activity and the lack of proper methodological support for such preparation enable the authors to conclude that nowadays the pedagogical science does not fully reveal the historical and pedagogical aspect of the problem on teaching young learners, does not generalize the leading ideas of national and foreign teachers on this issue, does not reveal the content of primary education, as well as the practical experience of teaching children in primary schools. The hypothesis of the study is that the dominant motives of young learners determine a responsible attitude to learning and largely depend on the motives guiding a learner during labor training. The need in preparing primary school teachers in a higher educational institution as highly educated specialists capable of flexibly reformatting the direction and content of their own professional activities, selecting new forms, methods, and teaching aids has an impact on becoming and the formation of dominant motives of young learners during labor training. LITERATURE REVIEW The dominant imperatives of the new strategy of vocational training of specialists from the standpoint of the new philosophy of education have been recently clarified by S. Goncharenko Despite the undeniable achievements in the development of productive labor, the national science lacks holistic studies focused on the problem of preparing primary school teachers for labor training of young learners. 
The analysis of pedagogical references revealed a rather small number of studies in terms of reassessment and understanding of the historical experience related to the organization of young learners' labor training. At the same time, the practice of solving the issues on the organization and involvement of young learners in labor training in primary schools is of significant scientific value for developing theoretical and organizational-methodological principles of young learners' labor training in modern conditions. The purpose of the article is to analyze the preparation of future primary school teachers for the organization of young learners' labor training during the educational process in a higher educational institution. Objectives of the article: 1. To reveal the essence and structure of the "readiness of future teachers to organize labor training of young learners" phenomenon; 2. To monitor the preparation of future primary school teachers for the organization of young learners' labor training during the educational process in a higher educational institution; 3. To discuss the problems of the school subject "Labor training" and to develop forms and methods for preparing future primary school teachers to organize labor training. METHODS In order to reach the study objectives, the authors used the following theoretical methods: analysis (retrospective and comparative) of psychological and pedagogical references, generalization and classification of scientific data in philosophical, psychological and pedagogical, educational and methodical sources for defining the state and theoretical substantiation of key concepts and categories of the study on preparing future primary school teachers for the "Labor training" and "Labor" subjects. In order to study the quality of educational services of future professionals, the authors used the following empirical methods: diagnostic (questionnaires, surveys, and testing) methods for determining the results of academic performance, as well as monitoring of the preparation of future primary school teachers (Dubovitskaya, 2005;Delhaxhe, 2009). In order to carry out the diagnosis in educational institutions, a survey was conducted among 49 teachers. The survey was aimed at checking the quality of students' knowledge. The developed questionnaire is original. The nature of learning motivation was determined using the method of M. Matyukhina (1984). Trying to get objective results, the authors largely used the following methods: observations, discussions, and mini answers to questions, essays for a given topic, as well as questionnaires for students, parents, and teachers. The authors developed Questionnaire 1. Ideas about the essence of labor training and Questionnaire 2. The level of attitude to educational activities and self-control, as well as task-situations on organizing labor training (the questionnaires are original). RESULTS Vocational training is defined as a process of forming a specialist for one of the areas of labor activity related to mastering a certain type of occupation, profession (Delhaxhe, 2009, p. 222-223). Vocational training is not the same as vocational education. It is not synonymous with professional development and professional adaptation, either, although these processes are interrelated in the development of a specialist. It is a set of special knowledge, skills and abilities, qualities, practical experience and standards of behavior that provide the opportunity to work successfully in a particular profession. 
It is also a process of communicating relevant knowledge (Orlov, 2003, p. 12-13). Accordingly, in special psychological and pedagogical references, as well as in their dissertations, the authors often do not consider this category in detail, and use it for studying with other concepts. The latter is probably due to the fact that the concept of "readiness" is widely used, and its meaning seems to be quite clear. However, it is assumed that a special reference to the essence of this concept may more accurately define its content, interpretation in modern references, and this will help to determine its place in the study and ways to relate to other concepts. Vocational training will be interpreted as a process of acquiring knowledge, skills, and abilities that enable performing work in a particular field of activity. Psychologists consider preparedness as a special mental state that arises as a manifestation of a qualitative neoplasm in the structure of personality at a certain stage of its development. The analysis of the published studies proves that nowadays, there is a powerful information base on the essence and the content of preparing future teachers. At the same time, a number of issues related to preparing future teachers for the organization of labor training do not have a convincing and reasoned solution, although the need for this is increasingly realized in theory and in practice. Thus, the readiness of future primary school teachers to organize labor training is interpreted as a result of preparing students when studying in a higher pedagogical institution, as the state of the future teacher who has mastered the system of knowledge of productive, project technologies. The formation of future primary school teachers' readiness is not a spontaneous or involuntary process. This is a systematic and purposeful activity whose success is impossible without revealing "structural links" of the personality that are indicators of the students' readiness for the productive labor activity with young learners. The authors consider the preparation of future primary school teachers for professional pedagogical activity as a holistic educational and pedagogical process aimed at forming in students a system of necessary knowledge, skills, and abilities for productive work of young learners, providing them with methods and techniques to turn productive technologies to the project activities, as well as the formation of a professionally-focused creative personality of the future teacher. The research and experimental work were carried out at the Kryvyi Rih State Pedagogical University, the South Ukrainian Pedagogical University named after K. D. Ushynsky, and the Kirovograd State Pedagogical University named after V. Vynnychenko. Various types of studies covered 360 students, 49 higher school teachers, and 89 school teachers. According to the survey of teachers, only (14.2%) formulates the concept of labor training completely, clearly, and reasonably; 57.1% of the respondents identify labor training with work, and the rest of the teachers (28.7%) formulate the concept by giving specific examples. According to 85.8% of the respondents, the organization of labor training is an urgent need of a modern school, while 14.2% indicate that the organization of such training requires much time and effort, which does not always justify itself. 
The teachers indicated the following difficulties: the lack of learners' positive learning motivation -85.7%, which does not ensure the efficiency of performing tasks, the need in defining learners' individual characteristics and their typological grouping -42.8%, the organization of group work -71.4%, the organization of independent work -42.8%, and the lack of time for preparing materials -100%. The answers for the question What forms of learning organization, apart from the lesson, do you use? were as follows: 76.0% of the surveyed teachers use classes in the extended daycare groups, 18.0% use afterschool programs, 43.3% -game lessons (project defense, travel lessons), and 12.0% -educational excursions. It is possible to see that most teachers realize the need to introduce a set of various forms of learning adapted to school conditions and age characteristics of students into the educational process (various types of lessons, consultations, conferences, interviews, etc.) to overcome the leveling in the learning process. The trend of residual funding of education by the state caused the disability of many schools to update classrooms and buy new equipment. There is a lack of visual means. All this has a negative impact on the quality of education and reduces young learners' interest in lessons. This study confirmed this trend and showed that only 29.0% of the surveyed teachers rated the equipment of the educational process in their subject as good, 61.0% rated the equipment availability as satisfactory, and 10.0% rated it as "unsatisfactory rather than satisfactory." The nature of learning motivation was determined by using the method of M. Matyukhina (1984, p. 149-150) that allowed identifying the leading, dominant motives in the learners' motivation. All motives indicated in this method can be classified into broad social (motives of duty and responsibility, self-determination and self-improvement), narrow personal (well-being and prestige), educational and cognitive (related to the content and process of learning), and those for the trouble prevention. According to testing learners from 4-A (25 learners) and 4-B (25 learners), social motives dominated in the learners' learning motivation. This is 52% of the learners from 4-A and 48% of those from 4-B. Twelve percent and 8% of the learners from the fourth form have narrow personal motives in learning, respectively. The rest of the learners displayed the availability of educational and cognitive motives. This is 36% of the learners from 4-A and 44% of those from 4-B. In addition, as for the learners' educational and cognitive motives, those of the educational content predominate (28% of the learners from 4-A and 4-B). Thus, the learners under the study find it extremely important to be recognized by the teacher and parents, as well as to have trusting relationships with age mates and obtain the opportunity to get a high mark in the class. Subject to the proper preparation for school, children are usually optimistic about their school life. Emotionally, they are very vulnerable and easily share opinions about their emotional state with the people they love. Therefore, taking this into account, in order to study the young learners' psychological state, their satisfaction or dissatisfaction with their school life, parents of learners from forms 1 -4 were offered to complete the relevant questionnaire. The parents' answers were analyzed and classified into three groups: positive attitude, neutral attitude, and indifferent attitude. 
The results are visually presented in the table and in Fig. 1. Analysis of the problem on preparing future primary school teachers for the organization of pupils' labor training Figure 1. Young learners' attitude to learning The analysis of the table confirmed that a responsible attitude to learning largely depended on the motives guiding a learner during labor training. Thus, 46% of the learners from the second form, and only 28% of the learners from the fourth form have the positive attitude. Twenty percent of the learners from the fourth form have the indifferent attitude. Thirty-nine percent of the learners form the second form and 50% of the learners from the third form have the neutral attitude. Parents were asked to give mini answers to questions asked by primary school teachers. Below are the examples of questions and answers. The parents often answered: a good and kind person to the question "Who do I see my child in the future?" Answering the second question "What does my child like to do most of all?" the parents wrote that their child liked drawing, helping around the house, dancing, and communicating with friends. Answers to the third question "What does your child value in people most of all?" were almost the same and specified common human qualities: kindness, honesty, decency, obligation, understanding, and gratitude. The most common answers to the fourth question "What does my child dream of?" were a new toy, a bicycle, and a soccer ball. Most of the respondents answered positively to the fifth question "Do I help enough in my child's studies?" The analysis of the obtained answers gives valuable information about the motivation of learning dynamics and tendencies about the development of a certain learner's attitude to learning, labor activity, and school. At the ascertaining stage of the experiment, an exhibition of learners' drawings was organized, and the drawings were evaluated according to certain criteria: aesthetics and originality. The children learned to analyze their work and form the ability to think critically and constructively. The authors studied the works of each child and created presentations of drawings. At the formative stage, the exhibitions of works were individual and were evaluated by using game forms and techniques (children's jury, "invent a fairy tale", and "an amulet for our city"). The results of studying these stages of the experimental work are shown in the table and in Fig. 2 According to the table, only 16.67% of the children have a high level, and 50% of them have a low level, which clearly shows that the children's drawings need to be improved and the teaching methods of primary school teachers need to be developed. In order to identify the gaps of students when they organize learners' labor training, a comprehensive survey was carried out. The following questionnaires were developed: Questionnaire 1. Ideas about the essence of labor training; Questionnaire 2. The level of attitude to educational activities and self-control, as well as task-situations on organizing labor training. The survey revealed difficulties in understanding the essence of educational activities in their organizational and cognitive aspects by most students. In particular, only 18% of the surveyed students from the experimental group and 21.8% of the students from the control group consider it necessary to have a formed and aware concept about the educational activities as an important condition for successful learning of future primary school teachers. 
It is assumed that 59% and 60% Analysis of the children's works (drawings) by the following criteria: aesthetics, originality 1 High 2 Sufficient 3 Low of the first-year students have it, respectively. However, a clear, consistent, correct, and detailed answer to this question was given by only 23% of the first-year students from the experimental group and by 31.2% of the students from the control group; 18% and 18.2% of the students made inaccuracies, respectively; the rest of the answers can be attributed to tautologies ("Yes", 42% of the students from the experimental group and 41.2% of the students from the control group admit that self-control causes excessive nervousness and insecure actions; 11% of the students from both groups believe that self-control does not play any role in learning, and only 40% of the first-year students from the experimental group and 42.7% of the students from the control group say that self-control helps to optimize their activities). All the students surveyed need systematic work on developing skills on the organization of labor training. No student considers such work superfluous, which indicates serious adaptive problems the first-year students have ("this is the main activity of students", "this is to work at in the classroom", etc.). Summing up the results of the survey, it is possible to say that, firstly, the authors did not reveal any special differences in the content and problems of the educational activities of the students from two experimental groups. Secondly, only 30% of the students, i.e., each third, feel like a subject of the educational activity and try to form full-fledged partnerships with other participants in the educational process -students, teachers, tutors, etc. Thirdly, only 40% of the students feel the need to form their own learning activities and such an important element as selfcontrol. Fourthly, according to 42% of the students, in their activities, the teacher's control dominates over the self-control, and this circumstance prevents them from taking responsibility for learning results and self-management in educational activities. Fifthly, 62% of the respondents find it rather difficult to set goals, plan, analyze, adjust and carry out self-control in educational activities. DISCUSSION Labor training in primary school is an important link in the system of subjects aimed at the comprehensive harmonious development of learners. It aims at developing the personality by involving learners in creative work, forming a constructive approach to solving work problems. Labor training is an important means of comprehensive development of primary school learners if it is planned correctly, taking into account the learners' age and physiological characteristics. The experimental studies carried out by doctors have shown that the alternation of practical work in creative workshops with classroom activities increases the efficiency of primary school learners and has a positive effect on the development of practical knowledge and skills. During labor training classes, learners' physical activity is combined with mental activity, because learners have to solve a number of creative tasks (product design, development of step-by-step technology of their production, etc.). At the same time, young learners use their knowledge of the basics of science and materials technology, as well as acquire new knowledge. Thus, labor training comes with the intense mental activity, which helps to raise the learners' intellectual abilities. 
Textbooks on labor training present such components as "Designing movable models from cardboard and paper", "Story cutout", "Origami", "Production of three-dimensional artificial flowers", "Collage", "Work with plasticine", "Beadwork", "Work with modern artificial materials", "Weaving", "Embroidery", "Volume application of fabric and buttons", "Volume figures of wire", "Decorative panel", "Art decoration and design", and "Self-service. Clothing and footwear", "Papiermâché", "Excursions", "Safety rules", and "Creative workshop", whose content confirms the compliance with the model of the learning process at a certain stage of primary school development in the labor training lesson. The program of labor training in primary school focuses on the formation of children's constructive approach to solving labor problems and provides for the learners' consistent involvement in the technical creativity when making a variety of products from natural materials, paper, fabric, wood, metal, plastics, and waste materials. In particular, the program includes the "Man and technology" module where children learn about the world of technical professions, master the techniques of modeling and design by using parts from the "Designer" and "Architect" sets and various materials on the sample, technical drawing, and own design. The current state of education necessitates the training of primary school teachers in higher educational establishments as highly educated specialists capable of flexible reformatting of the direction and content of their own professional activities, the selection of new forms, methods and teaching aids. Nowadays, in the primary education there are many innovations that are one way or another related to the young learners' intellectual development and, accordingly, require high professional culture from a teacher. Thus, the analysis of existing programs and guidelines for primary school, the analysis of the content of scientific conferences and the bank of pedagogical ideas enabled the authors to focus on pedagogical technologies that were experimentally tested in many regions of Ukraine and gave positive results in the learners' intellectual development. The means of training future teachers for the organization of learners' labor training in the educational process of primary school is a set of educational and methodological support, including work programs of general and special disciplines ("General pedagogy", "Methods of labor training", "Organization of project activities"), methodical manuals, methodical instructions, workbooks, etc. These are examples of themes for individual tasks: "Teacher of the Ukrainian national school", "Methods of preparing students for lectures, seminars and practical classes", "Formation of students' research skills", "Requirements for pedagogical communication", "Methods of searching for learners' labor training in the primary school educational process", "The modern primary school teacher's image", "Working capacity and conditions to maintain it", "Nonverbal means of the teacher's impact on a primary school learner", "Student's rights and responsibilities", "Future teacher as a researcher of pedagogical phenomena and processes", "Requirements for the personality of a primary school teacher", "The ideal teacher: who is he/she?". The production of a wall newspaper on pedagogical topics can be a possible project. During their pedagogical practice, second-year students were offered a task, e.g., to write an article: "Family portrait". 
The families who can share their experience on preparing a child for school are invited to school. Such work with parents is preceded by the preparatory work. The efficient work on preparing a child for school is facilitated by the differentiation of pedagogical comprehensive education of parents that involves taking into account its main aspects (social, demographic, ethnographic, and ideological). In the games, children selected pictures according to the task. These were mostly travel games: "Journey to the books exhibition "Fairytale Heroes", "Long Journeys", "Journey around the city". The pictures were placed in a hall or in another room, where the "travelers" had to go, which gave them certain independence, increased their interest in the game, and enriched the game actions. Pronouncing the counting rhyme, the children were divided into several groups (four, five), according to the tasks of the game. Each group that included two children was given a task: one child had to look at the vegetables grown in the field and select the relevant pictures, the second child had to choose the fruits growing in the garden, and the third one was to name pets, etc. Other children were waiting for "the travelers" who, when returning, placed the pictures on the stands and talked about them, naming the group the animal belonged, i.e., they operated with generalized concepts. The next group included such games as "Tell me everything you know about the object". The children were given "pictures-letters" to be "read". These were such games as "School", "Sports", "Shop", "Vegetable and fruit market", "Guess", "Puzzles", "Story games", "Car", etc. Repeating the game, the students gave the children two "picture sheets" depicting objects and offered to tell what they looked like and what differences they had, as well as name the group they belonged to. The games of this type helped to consolidate the children's ability to analyze, synthesize, combine objects into one group based on comparison and finding common or similar features. The students were offered probable project themes: "Methods of research on productive work", "Subject and essence of scientific activity", "Sources of scientific information on labor training and their use in scientific work", "Students' research work", "Basic forms of implementing research results", "Research of the students' organization of labor activity during various forms of education". Thus, the suggested variety of forms and methods of work had a positive impact on the formation of a positive attitude to learning and indicators of preparing learners for labor training. CONCLUSION Based on the analysis of scientific references, the readiness of future primary school teachers to organize labor training is interpreted as a result of preparing students during their studies at a higher pedagogical institution, as the state of the future teacher who has mastered the system of knowledge of productive, project technologies. The formation of future primary school teachers' readiness is not a spontaneous or involuntary process. This is a systematic and purposeful activity whose success is impossible without revealing "structural links" of the personality that are indicators of the students' readiness for the productive labor activity with young learners. The survey of young learners made it possible to determine what children were most interested in and what kind of work they liked. 
The analysis of the obtained data shows that if there is free time, only 8% of the learners like making souvenirs from various materials (cardboard, natural materials, textiles), being active in labor lessons. The other 29% of the young learners are rarely active. They spend an hour every other day for their favorite activity. This has given valuable material about the motivation of the learning dynamics and tendencies about the development of a certain learner's attitude to the labor activity. Analyzing the students' productive labor activity, the authors noticed that only 12% of the students from the control group and 11% of the students from the experimental group worked at the handiworks themselves and creatively. The other students worked using samples and illustrations. According to the analysis of various students' works, it has been revealed that a small number of future primary school teachers are able to make and artistically decorate handiworks from various materials. Thus, the hypothesis of the study has been proved: the dominant motives of young learners determine a responsible attitude to learning and largely depend on the motives guiding a learner during labor training. The need in preparing primary school teachers in a higher educational institution as highly educated specialists capable of flexibly reformatting the direction and content of their own professional activities, selecting new forms, methods, and teaching aids has an impact on becoming and the formation of dominant motives of young learners during labor training. The learners' work during the formation of the national education system was characterized by the adaptation to the new market conditions of management where its organization was subject to the requirements of self-financing. An important condition for the efficient productive work in secondary schools is the fruitful work and pedagogical skills of their leaders and teachers of labor training who use the experience of teaching methods to ensure the organization of learners' work in the educational process at a high professional and pedagogical level. The further work can be continued by developing pedagogical conditions for improving productive work of future primary school teachers.
In silico biomechanical design of the metal frame of transcatheter aortic valves: multi-objective shape and cross-sectional size optimization Transcatheter aortic valve (TAV) implantation has become an established alternative to open-hearth surgical valve replacement. Current research aims to improve the treatment safety and extend the range of eligible patients. In this regard, computational modeling is a valuable tool to address these challenges, supporting the design phase by evaluating and optimizing the mechanical performance of the implanted device. In this study, a computational framework is presented for the shape and cross-sectional size optimization of TAV frames. Finite element analyses of TAV implantation were performed in idealized aortic root models with and without calcifications, implementing a mesh-morphing procedure to parametrize the TAV frame. The pullout force magnitude, peak maximum principal stress within the aortic wall, and contact pressure in the left ventricular outflow tract were defined as objectives of the optimization problem to evaluate the device mechanical performance. Design of experiment coupled with surrogate modeling was used to define an approximate relationship between the objectives and the TAV frame parameters. Surrogate models were interrogated within a fixed design space and multi-objective design optimization was conducted. The investigation of the parameter combinations within the design space allowed the successful identification of optimized TAV frame geometries, suited to either a single or groups of aortic root anatomies. The optimization framework was efficient, resulting in TAV frame designs with improved mechanical performance, ultimately leading to enhanced procedural outcomes and reduced costs associated with the device iterative development cycle. Introduction Transcatheter aortic valve (TAV) implantation has become an established clinical procedure that provides a minimally invasive alternative to open heart surgical valve replacement in medium-to high-risk elderly patients with calcific aortic valve disease and severe aortic stenosis (Tabata et al. 2019). Currently, there are approximately 180,000 potential candidates for TAV replacement in the European Union and in Northern America annually, expecting an increase in number in the next years (Durko et al. 2018). Due to its minimally invasive approach and ongoing success, TAV replacement could become the standard treatment also for low-risk patients (Howard et al. 2019), leading to a fast expansion of new TAV designs (Fanning et al. 2013). TAVs are generally composed by a bioprosthetic valve sutured on a metal frame (or stent) and can be grouped into balloon-expandable and self-expandable valves, featured by a stainless-steel and Nitinol frame, respectively (Dasi et al. 2017). TAVs are designed to be crimped into a catheter, delivered during the implantation procedure through the aorta and placed on the patient's diseased aortic valve to restore its native functionality (Jones et al. 2017). Considerable technological advances were conducted to improve the performance and safety of TAVs, although several complications still affect the potential of the treatment and are becoming of 1 more concern with the expansion to younger and lower-risk patients (De Biase et al. 2018). The most common complications affecting the current generation of TAV devices include postoperative paravalvular leak (PVL), conduction abnormalities, and valve thrombosis (Rotman et al. 2018). 
Additionally, aortic root damage and prosthesis migration, which are typically associated with the mutual interaction between the TAV and the aortic root, may occur (Neragi-Miandoab and Michler 2013). The design of a TAV frame is a challenging task as it involves the fulfillment of multiple requirements. From a biomechanical viewpoint, TAVs should: (1) assure proper apposition to avoid PVL (Wang et al. 2012;Morganti et al. 2014;Tanaka et al. 2018), (2) generate low contact pressures to exclude conduction abnormalities (Rocatello et al. 2019), (3) produce low stress within the aortic root to limit the tissue damage (Auricchio et al. 2014;Morganti et al. 2014;Wang et al. 2015;Finotello et al. 2017;McGee et al. 2019a), and (4) exert an adequate pullout force to prevent valve migration (Mummert et al. 2013;Tzamtzis et al. 2013;McGee et al. 2019b). Among the arsenal of available tools supporting not only the design phase of TAVs, with the evaluation and optimization of their mechanical performance (Dasi et al. 2017), but also the definition of the optimal TAV implantation procedure (Schultz et al. 2016), in silico modeling (Luraghi et al. 2021), mainly based on the finite element (FE) method, assures the achievement of effective results with reduced time and costs as compared to a pure experimental approach. In particular, FE modeling represents the elective tool for achieving the optimization of the TAV frame geometry. Recently, a computational framework based on patient-specific aortic root models was proposed to successfully optimize the geometry of a commercial self-expandable TAV frame (Rocatello et al. 2019). However, the study was limited to the optimization of the TAV inflow portion. In order to improve the effectiveness of the optimization procedure, all the parameters associated with the overall TAV frame geometry should be considered, aiming to provide a comprehensive understanding of the relation between the TAV design and the post-procedural outcomes. In the present study, a multi-objective shape optimization framework is presented, based on FE modeling of TAV frame implantation in an idealized aortic root anatomical model. The final goal is to contribute to improve the mechanical behavior of a TAV frame by using an approach that concurrently assures (1) reduced costs associated with the device iterative development cycle and (2) improved post-procedural outcomes. Technically, the design of the experiment method coupled with surrogate modeling was adopted here to explore the biomechanical interactions between different TAV designs and the aortic root. This allowed to define approximate relationships between optimization objectives, associated to postoperative complications, and design parameters of the entire TAV frame geometry, ultimately leading to the identification of optimal TAV frame designs from the biomechanical viewpoint. Methods The procedure applied for shape and cross-sectional size optimization of a TAV frame consisted of the following main steps ( Fig. 
1): (1) FE modeling of TAV implantation procedure including an idealized aortic root model and a conventional Nitinol TAV frame model parametrized through a mesh-morphing procedure, (2) formulation of the optimization problem through the definition of the optimization objectives and feasible solution space, (3) coupling the design of the experiment method with the surrogate modeling approach to define an approximate relationship between optimization objectives and design parameters, and (4) identification of the optimal geometric attributes of the TAV frame. Each step of the present workflow is detailed in the following subsections. Aortic root model An idealized FE model of the human aortic root including a portion of the ascending aorta, the left ventricular outflow tract (LVOT), and the native aortic valve leaflets was created using Hypermesh (Altair Engineering, Troy, MI, USA) in conjunction with Abaqus/Standard (Dassault Systèmes Simulia Corp., Johnston, RI, USA) (Fig. 2a). The anatomical features of the model were based on previous studies (Labrosse et al. 2006;Auricchio et al. 2011;Formato et al. 2018). Specifically, the aortic anulus and LVOT diameters were set to 25 mm and the ascending aorta diameter to 30 mm. Additional dimensions are reported in Fig. 1-Suppl. A homogeneous thickness of 1.5 and 0.5 mm was assigned to the aortic root and the leaflets, respectively. Calcific and non-calcific aortic root models were generated. Calcifications were modeled as idealized structures. Based on previous experimental findings (Thubrikar et al. 1986), the geometrical pattern of human calcific deposits was classified into two main categories here referred as pattern I, characterized by an arc shape located along the leaflet coaptation line (Fig. 2b), and pattern II, arc shaped and located along the leaflet attachment line (Fig. 2c) (Thubrikar et al. 1986;Luraghi et al. 2020). Maximum thickness value (5 mm) and volume (1100 mm 3 ) of the idealized calcifications were set based on data from patients suffering from aortic stenosis (Sturla et al. 2016;Pawade et al. 2018). An isotropic, incompressible hyperelastic Mooney-Rivlin material model (Table 1) was adopted to describe the mechanical behavior of the aortic root (Auricchio et al. 2011;Gunning et al. 2014). An elasto-plastic material model with perfect plasticity ( Table 1) was employed to model the mechanical behavior of the calcific deposits (Bosi et al. 2018). The aortic root and the leaflets were discretized using four-node shell elements with reduced integration S4R, as justified by their thin geometry with constant thickness (Bosi et al. 2018) and calcifications were meshed using four-node tetrahedral elements C3D4 (Morganti et al. 2016;Bosi et al. 2018;McGee et al. 2019a). Tied contact was modeled between leaflets and the aortic root, as well as between leaflets and calcium deposits (Ovcharenko et al. 2016;Bosi et al. 2018). The following three different scenarios were investigated: (1) aortic root without calcifications (in the following referred to as healthy configuration) (Fig. 2a), (2) diseased aortic root presenting pattern I calcifications (in the following referred to as diseased I configuration) (Fig. 2b), and (3) diseased aortic root presenting pattern II calcifications (in the following referred to as diseased II configuration) (Fig. 2c). TAV frame model A FE element model resembling the 29-mm CoreValve TAV (Medtronic, Dublin, Ireland) was created ( Fig. 2d) using Hypermesh and Abaqus/Standard. 
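Referring back to the aortic wall constitutive choice described above, the short Python sketch below evaluates the uniaxial response of an incompressible two-parameter Mooney-Rivlin material. The coefficients C10 and C01 are hypothetical placeholders, since the actual values are listed in Table 1 of the paper and are not reproduced in this text; only the functional form W = C10(I1 − 3) + C01(I2 − 3) is taken from the model description.

```python
import numpy as np

# Hypothetical coefficients (MPa); the values actually used are those of Table 1
C10, C01 = 0.55, 1.0

def cauchy_stress_uniaxial(stretch, c10=C10, c01=C01):
    """Cauchy stress [MPa] of an incompressible two-parameter Mooney-Rivlin
    material, W = C10*(I1 - 3) + C01*(I2 - 3), under uniaxial extension."""
    lam = np.asarray(stretch, dtype=float)
    return 2.0 * (lam**2 - 1.0 / lam) * (c10 + c01 / lam)

# Illustrative stress-stretch values over a moderate stretch range
for lam in (1.05, 1.10, 1.20, 1.30):
    print(f"stretch {lam:.2f} -> stress {cauchy_stress_uniaxial(lam):.3f} MPa")
```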
The model was simplified by considering only a Nitinol frame composed of 30 strings and neglecting the porcine pericardial tissue valve, which has a marginal structural role (Bailey et al. 2016). Shape and dimensions of the TAV frame were retrieved from the manufacturer's data sheet and from the literature (Rocatello et al. 2019). The mechanical behavior of the Nitinol alloy was described by implementing in Abaqus a superelastic constitutive model (Auricchio and Taylor 1997) through a user-defined subroutine. The material parameters, retrieved from a previous study (Morganti et al. 2016), are summarized in Table 2. In accordance with previous computational studies (Gessat et al. 2014; Hopf et al. 2017; Rocatello et al. 2018, 2019), the TAV frame geometry was meshed using B31 Timoshenko beam elements, defining a local coordinate system for each element to properly orient the beam cross-section. This modeling choice was motivated not only by the device geometry, composed of slender strings, but also by the necessary compromise between computational efficiency and adequate accuracy of the results, as well as by the suitability of these elements for the generation of parametrized stent models (Hall and Kasper 2006).

Table 1: Aortic root and calcium deposit material parameters (Auricchio et al. 2011; Gunning et al. 2014; Bosi et al. 2018).

Parametrization of the TAV frame model
The nominal geometry of the TAV frame was parametrized as a combination of four different morphing shapes (i.e., shapes 1-4, Fig. 3) by using a mesh-morphing approach in Hypermesh. Moreover, the thickness and width of the beam cross-section were considered as two additional size parameters. The design space of the six parameters was defined by properly setting a search range for each morphing shape (according to a normalized shape factor sf) and size parameter. To do that, preliminary studies were conducted to identify degenerate geometries and convergence issues in the FE analyses of TAV deployment for the implemented shape combinations (Li et al. 2009; Wu et al. 2010). The four mesh-morphing shapes were introduced to parametrize the whole TAV frame geometry in terms of radial and axial dimensions and, at the same time, to morph the upper and lower parts of the TAV frame separately with two independent parameters. In more detail, shapes 1 and 3 varied the total height of the frame and the radial position of all the nodes, according to shape factors sf_1 and sf_3 within the ranges [0, 1] and [−1, 1], respectively (Fig. 3). Shapes 2 and 4 were defined by folding the TAV frame geometry onto a cylinder with a diameter equal to the minimum TAV frame diameter, varying the top and bottom diameters of the TAV frame, respectively, according to shape factors sf_2 and sf_4 within the range [−1, 1] (Fig. 3). In addition to the four shapes, the thickness and width of the TAV frame string cross-section (nominal values of 0.25 and 0.45 mm) were varied within the ranges [0.15 mm, 0.35 mm] and [0.35 mm, 0.55 mm], respectively. The links between the strings were assumed to have the same thickness as the strings and a width equal to 1.5 times the string width.
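Conceptually, the frame parametrization can be pictured as a linear superposition of precomputed morphing displacement fields scaled by the shape factors, plus the two cross-sectional size parameters. The sketch below is a minimal Python illustration of this idea; the actual morphing was performed with HyperMesh shape variables, so the node set, the morph fields, and the assumption that the four shapes combine additively are placeholders introduced here for illustration only.

```python
import numpy as np

def morph_frame(base_nodes, shape_vectors, sf, thickness, width):
    """Return morphed nodal coordinates and beam cross-section data.

    base_nodes    : (n_nodes, 3) nominal TAV frame nodal coordinates
    shape_vectors : (4, n_nodes, 3) unit morphing displacement fields (shapes 1-4)
    sf            : (4,) shape factors (sf_1 in [0, 1]; sf_2, sf_3, sf_4 in [-1, 1])
    thickness, width : beam cross-section size parameters [mm]
    """
    # Assumed additive combination of the four morphing shapes
    morphed = base_nodes + np.tensordot(sf, shape_vectors, axes=1)
    section = {"thickness": thickness, "width": width,
               "link_thickness": thickness, "link_width": 1.5 * width}
    return morphed, section

# Illustrative call with placeholder geometry (2110 nodes only mirrors the
# number of beam elements reported for the frame mesh)
rng = np.random.default_rng(0)
base = rng.normal(size=(2110, 3))
shapes = 0.1 * rng.normal(size=(4, 2110, 3))
nodes, section = morph_frame(base, shapes,
                             sf=np.array([0.5, -0.2, 0.3, 0.1]),
                             thickness=0.25, width=0.45)
```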
Finite element analyses of TAV implantation
FE analyses of the TAV implantation procedure were performed using the implicit code Abaqus/Standard to solve the non-linear equations of static equilibrium on 6 computing cores of a workstation equipped with an Intel® Core™ i7-8700 and 32 GB RAM. Two procedural steps were simulated (Table 3). In the first step, the TAV frame was inserted into the catheter capsule, modeled as two concentric rigid cylindrical surfaces (Cabrera et al. 2017) (Fig. 4a, b, Video 1-Suppl). In the second step, the device was released and placed in contact with the aortic root (Fig. 4c, d, Video 1-Suppl). Due to the angular symmetry of the model (see Fig. 2), only one third of the aortic root, frame, and catheter was modeled, and symmetry boundary conditions were applied accordingly. Interactions between the parts were implemented with the contact-pair algorithm, based on a master-slave approach, considering the default "hard" normal contact behavior with a friction coefficient of 0.09 for the TAV frame/aortic root and TAV frame/catheter capsule interactions, and a friction coefficient of 0.36 for the frame/calcium and aortic root/calcium interactions (McGee et al. 2019b). Artificial damping was added to stabilize the non-linear simulations, checking that the ratio between the related dissipation energy and the total internal energy remained below 5% (Abaqus 2016). Nodes at the lower extremity of the TAV frame were positioned 4 mm below the aortic anulus plane (Fig. 4c), in accordance with the procedural guidelines defined by the device manufacturer (Medtronic 2014), and were constrained in the vessel axis direction, as were the nodes on the upper and lower edges of the aortic root (Fig. 4c). In the crimping simulation step, the external rigid cylinder was radially crimped from a diameter of 100 to 6 mm, while the internal cylinder remained fixed at a diameter of 5 mm, in accordance with the catheter capsule dimensions provided by the device manufacturer (Medtronic 2014). In the release simulation step, the external cylinder was expanded back to its initial diameter (i.e., 100 mm) to allow for the contact between the TAV frame and the aortic root model.

Fig. 2: Idealized FE models of the human aortic root and of the TAV frame. (a) "Healthy" aortic root model without calcific deposits; (b) and (c) "diseased I" and "diseased II" aortic root models, respectively, according to the idealized calcium patterns I and II (Thubrikar et al. 1986); (d) TAV frame.

A mesh independence analysis was carried out before the execution of the optimization study by progressively doubling the mesh element size. The results were assumed to be mesh independent when the difference between the solutions of two consecutive mesh refinements was less than 3% in terms of the pullout force magnitude exerted by the device and the peak maximum principal stress within the aortic wall, and less than 10% in terms of the peak contact pressure. As a result, a mesh cardinality of 23,408 S4R shell elements for the aortic root, 2110 B31 beam elements for the TAV frame, 20,764 SFM3D4R surface elements for the catheter, and 23,266 and 35,967 C3D4 tetrahedral elements for the calcification patterns I and II, respectively, was adopted.
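The mesh-independence criterion described above can be summarized in a few lines of Python: two consecutive refinements are compared and accepted when the relative changes fall below the 3% (pullout force and peak stress) and 10% (peak contact pressure) thresholds. The numerical values used below are illustrative placeholders, not results from the study.

```python
def mesh_independent(coarse, fine, tol_force=0.03, tol_stress=0.03, tol_cp=0.10):
    """Return True if the relative change between two consecutive mesh
    refinements is below the tolerances used in the study (3% for pullout
    force and peak maximum principal stress, 10% for peak contact pressure)."""
    def rel_diff(a, b):
        return abs(a - b) / abs(b)
    return (rel_diff(coarse["pullout_force"], fine["pullout_force"]) < tol_force
            and rel_diff(coarse["peak_mps"], fine["peak_mps"]) < tol_stress
            and rel_diff(coarse["peak_cp"], fine["peak_cp"]) < tol_cp)

# Placeholder results for two consecutive refinements (illustrative values only)
coarse = {"pullout_force": 8.1, "peak_mps": 1.90, "peak_cp": 0.31}
fine   = {"pullout_force": 8.0, "peak_mps": 1.95, "peak_cp": 0.33}
print(mesh_independent(coarse, fine))
```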
Optimization objectives and constraints The optimization of the biomechanical performance of the device and of the related effectiveness of the TAV implantation procedure consisted of minimizing the risk of migration, tissue damage, and conduction abnormalities associated with TAV replacement and, hence, involved, respectively, the maximization of the pullout force magnitude (defined as the resultant of the normal contact forces acting on all nodes of the TAV frame multiplied by their corresponding friction coefficient), the minimization of the peak maximum principal stress, and the minimization of the peak contact pressure. In detail, the optimization problem was subject to the following constraints, which defined the feasible solution space: (1) The pullout force magnitude exerted by the device should be greater than 6.5 N, a value proposed as the lower limit for avoiding migration of the device (Mummert et al. 2013; Tzamtzis et al. 2013; McGee et al. 2019b). (2) The peak value of the maximum principal stress within the tissue, considered as a measure of the risk of damage of the aortic root tissue (Auricchio et al. 2014; Morganti et al. 2014; Wang et al. 2015; Finotello et al. 2017; McGee et al. 2019a), should be lower than 2.5 MPa, which has been proposed as the material limit for the occurrence of tissue tearing (Wang et al. 2015). (3) The peak value of the contact pressure in the atrioventricular conduction system, considered as a measure of the risk of rhythm disturbances, should be lower than 0.43 MPa, which has been proposed as the upper limit for the occurrence of conduction abnormalities. It must be noted that, differently from previous studies where the atrioventricular conduction system was located in the LVOT of patient-specific models using computed tomography data, in the present study the location of the atrioventricular conduction system could not be precisely defined because an idealized aortic root model was investigated. For this reason, the contact pressure was conservatively computed over the entire LVOT. Additionally, the risk of PVL, which occurs when the device is not completely in contact with the aortic root at the level of the LVOT, was accounted for by applying a further constraint to the peak contact pressure objective: peak contact pressure values under the anulus plane should be greater than zero, to guarantee contact between the TAV frame and the aortic root wall. Therefore, solutions with non-null contact pressure in the LVOT are desirable to avoid PVL. Summarizing, the present optimization problem can be formulated as: maximize f_PF(x) while minimizing f_MPS(x) and f_CP(x), subject to x ∈ D, f_PF(x) > 6.5 N, f_MPS(x) < 2.5 MPa, and 0 < f_CP(x) < 0.43 MPa, where the pullout force f_PF, the peak maximum principal stress f_MPS, and the peak contact pressure f_CP are the optimization objectives; x = (sf_1, sf_2, sf_3, sf_4, t, w) is the vector of the design parameters; D is the design space; sf_1, sf_2, sf_3, and sf_4 are the shape factors associated with the mesh-morphing shapes of the TAV frame model; and the thickness t and width w are the cross-sectional size parameters.
Surrogate modeling Separate surrogate models were constructed for each objective within the multi-objective optimization framework. The central composite design (circumscribed) sampling strategy (Draper and Lin 1996) was implemented in Hyperstudy (Altair Engineering, Troy, MI, USA), running 77 sampling FE simulations of TAV implantation for each aortic root configuration. The number of samples was defined by the formula N = 2^k + 2k + n_0 (Draper and Lin 1996), where k is the number of design parameters (k = 6) and n_0 is the number of center points (n_0 = 1), giving N = 77.
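The 77 runs per aortic root configuration follow directly from the circumscribed central composite design count N = 2^k + 2k + n_0 with k = 6 and n_0 = 1. A minimal sketch of such a design in coded units is shown below; the axial distance alpha is set to the common rotatable choice (2^k)^(1/4), which is an assumption, since the paper does not report the value used in Hyperstudy.

```python
from itertools import product
import numpy as np

def ccd_circumscribed(k=6, n_center=1):
    """Generate a circumscribed central composite design in coded units.

    Returns an (N, k) array with N = 2**k + 2*k + n_center points:
    factorial corners at +/-1, axial (star) points at +/-alpha, and center points.
    """
    alpha = (2 ** k) ** 0.25  # rotatable-design choice for the axial distance (assumed)
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = alpha
        axial[2 * i + 1, i] = -alpha
    center = np.zeros((n_center, k))
    return np.vstack([corners, axial, center])

samples = ccd_circumscribed(k=6, n_center=1)
print(samples.shape)  # (77, 6): matches the 77 FE runs per aortic root configuration
```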
For the conducted simulations, the optimization objectives were computed and the output data were exported to Matlab (Mathworks, Natick, MA, USA). Gaussian process surrogate models (Rasmussen and Williams 2018) were adopted in Matlab to define an approximate relationship between the six design parameters and the optimization objectives. This combination of sampling strategy and surrogate model was selected after a preliminary analysis that compared the central composite design and Gaussian process surrogate model against other combinations involving the Latin hypercube sampling strategy (McKay et al. 1979) and the polynomial surrogate model (Draper and Lin 1996), which were previously used for the design optimization of endovascular devices (Li and Wang 2013; Clune et al. 2014; Bressloff et al. 2016; Alaimo et al. 2017; Rocatello et al. 2019). Details about this preliminary study are reported in the Supplementary Materials. The validity of the models was assessed with the leave-one-out principle, plotting each predicted value as a function of the corresponding simulated value and evaluating the overall validation error in terms of the predicted coefficient of determination R²_pred. Furthermore, a consistency check was performed by verifying that the computed standardized cross-validated residual (SCVR) values lay within the [−3, 3] range (Jones et al. 1998; Pant et al. 2012).
Multi-objective optimization The selection of the optimal TAV frame geometry, even after obtaining the surrogate models and defining the feasible solution space, is not straightforward, and several approaches can be applied to finalize the optimization process (Pant et al. 2011). In this study, the multi-objective optimization problem was initially considered as unconstrained. The constraints were applied in a second stage to identify optimal candidate geometries within the feasible solution space. In detail, two alternative approaches were adopted. First, a conservative constraint-based approach was applied, in which optimal candidates were required to remain in the middle region of the feasible solution space defined by the objective constraints, thereby avoiding poor performance of the device in any of the objectives. Specifically, the surrogate models were used to predict the objectives for each possible combination of design parameters x within a discretized design space (six parameters, eight samples within each parameter range). Three margins of safety MS_PF(x), MS_MPS(x), and MS_CP(x), each related to one objective and based on its constraint values, were defined for the pullout force, the peak maximum principal stress, and the peak contact pressure, respectively. An overall margin of safety MS(x) was then computed from the three individual margins. Among all the combinations of design parameters, the one that guaranteed the largest overall margin of safety MS(x) was conservatively identified as the optimal TAV frame candidate. This approach was applied twice: (1) considering separately the healthy, diseased I, and diseased II configurations, to identify the optimal TAV frame geometry for a specific anatomy, and (2) considering simultaneously the two "diseased" configurations, to identify an optimal TAV frame geometry implantable in a wider range of diseased anatomies. Hence, four optimal TAV frame geometry candidates were sought.
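The exact margin-of-safety expressions are not reproduced above, so the sketch below only illustrates the general idea of the conservative constraint-based selection: normalize the predicted distance of each objective from its constraint, take the worst of the three margins as the overall margin, and pick the design-space combination that maximizes it. The normalization and the use of the minimum as the overall margin are assumptions, not the authors' definitions.

```python
import numpy as np
from itertools import product

# Design-space bounds: shape factors sf1..sf4 and cross-section sizes (mm)
BOUNDS = [(0.0, 1.0), (-1.0, 1.0), (-1.0, 1.0), (-1.0, 1.0), (0.15, 0.35), (0.35, 0.55)]

def margins_of_safety(f_pf, f_mps, f_cp):
    """Hypothetical normalized margins with respect to the three constraints."""
    ms_pf = (f_pf - 6.5) / 6.5              # pullout force must exceed 6.5 N
    ms_mps = (2.5 - f_mps) / 2.5            # peak stress must stay below 2.5 MPa
    ms_cp = min(0.43 - f_cp, f_cp) / 0.43   # contact pressure must lie within (0, 0.43) MPa
    return ms_pf, ms_mps, ms_cp

def select_conservative_candidate(predict_pf, predict_mps, predict_cp, n_levels=8):
    """Return the design with the largest overall margin of safety on a discretized grid."""
    levels = [np.linspace(lo, hi, n_levels) for lo, hi in BOUNDS]
    best_x, best_ms = None, -np.inf
    for x in product(*levels):              # 8**6 = 262,144 candidate designs
        x = np.asarray(x)
        ms = min(margins_of_safety(predict_pf(x), predict_mps(x), predict_cp(x)))
        if ms > best_ms:
            best_x, best_ms = x, ms
    return best_x, best_ms
```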
Secondly, an approach based on Pareto optimality was applied to generate sets of optimal candidate geometries, ensuring high flexibility in the device design process, for which multiple desirable characteristics (e.g., hemodynamic features, manufacturing-related aspects, reduced costs) have to be considered in addition to the mechanical performance. Accordingly, the non-dominated sorting genetic algorithm (NSGA-II) (Deb et al. 2002), one of the most popular, reliable, fast-sorting, and elitist multi-objective genetic algorithms, suitable for identifying Pareto-optimal solutions (Yusoff et al. 2011), was used in the Matlab environment. Technically, a population of 200 individuals, a binary tournament selection, a crossover fraction of 0.8, and a Gaussian mutation were chosen for the multi-objective optimization problem, considering a maximum number of 600 generations. Sets of non-dominated optimal solutions were identified for the three objectives, so that an improvement in one objective could only be the result of the worsening of at least one of the other objectives. Subsequently, feasible solutions were identified within the Pareto front by considering the objective constraints, according to an approach already proposed in the past for stent design optimization (Pant et al. 2011). Due to the high computational cost required, a consistency check of the Pareto front was conducted by performing the corresponding FE analyses for three Pareto-optimal solutions for each aortic root configuration, selected at the extremes and at the center of the Pareto front. Predicted objective values were plotted as a function of the corresponding simulated values, checking the feasibility and optimality of the solutions.
Figure 5 shows the simulation outputs related to the three optimization objectives of interest in the case of the nominal TAV frame geometry virtually implanted in the three different aortic root configurations. Considering the healthy configuration (Fig. 5a, left panel), normal contact forces were mostly exerted from the aortic root onto the TAV frame in the LVOT, generating a total pullout force magnitude equal to 2.3 N. In contrast, for both the diseased I and diseased II configurations (Fig. 5a, central and right panels), normal contact forces were mainly exerted between the TAV frame and the calcium deposits, resulting in total pullout force magnitudes equal to 16.9 and 19.5 N, respectively. The peak maximum principal stress within the aortic root was lower in the absence of calcification (0.56 MPa vs. 1.91 MPa and 1.70 MPa for the healthy and the diseased I and II configurations, respectively) (Fig. 5b). The peak contact pressure in the LVOT was higher in the healthy case than in the diseased ones (0.66 MPa vs. 0.56 MPa and 0 MPa for the healthy and the diseased I and II configurations, respectively) (Fig. 5c). The null value of contact pressure occurring in the diseased II configuration indicated the absence of contact between the TAV frame and the LVOT, revealing the presence of a minimum gap area of 7 mm². According to a previous study reporting a correlation between the minimum cross-sectional gap area between the frame and the aortic anulus and the PVL volume (Tanaka et al. 2018), the computed minimum gap area of 7 mm² corresponded to a mild-to-moderate PVL.
Objective functions trade-offs Based on the 77 simulation samples and for each aortic root configuration, Fig. 6 summarizes the nature of the relationships between the optimization objectives.
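Independent of the specific NSGA-II implementation used for the Pareto-based search described above, the underlying non-dominance test and the subsequent feasibility filtering can be sketched as follows; the objective signs and thresholds follow the constraints stated earlier, while the function names are illustrative.

```python
import numpy as np

def pareto_front(objectives):
    """Return a boolean mask of non-dominated points.

    objectives : (n, 3) array with every column posed for minimization,
    e.g. columns [-pullout_force, peak_stress, peak_contact_pressure].
    """
    n = objectives.shape[0]
    non_dominated = np.ones(n, dtype=bool)
    for i in range(n):
        # point j dominates i if it is no worse in all objectives and strictly better in one
        dominates_i = (np.all(objectives <= objectives[i], axis=1)
                       & np.any(objectives < objectives[i], axis=1))
        if np.any(dominates_i):
            non_dominated[i] = False
    return non_dominated

def feasible(pullout_force, peak_stress, peak_pressure):
    """Apply the study's constraints to filter Pareto-optimal candidates."""
    return ((pullout_force > 6.5) & (peak_stress < 2.5)
            & (peak_pressure > 0.0) & (peak_pressure < 0.43))
```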
According to the formulated optimization problem, a conflict between pullout force magnitude and peak maximum principal stress, and between pullout force magnitude and peak contact pressure, was observable for all three configurations. This means that high pullout force magnitude values are effective in avoiding TAV migration, but they could lead to excessive peak maximum principal stress and peak contact pressure values, which are associated with an increased risk of tissue damage and conduction abnormalities, respectively (Wang et al. 2015; Dasi et al. 2017; Rocatello et al. 2019). The sample points related to the healthy configuration presented, on average, peak maximum principal stress and pullout force magnitude values lower than those of the "diseased" configurations (Fig. 6a, b). Furthermore, in the healthy configuration, the low generated pullout force magnitudes and high peak contact pressures caused the sample points to lie outside of the feasible solution space (Fig. 6b, transparent gray region), in contrast with the sample points of the diseased configurations. Thirty-two and 56 sample points related to the diseased I and II configurations, respectively, were characterized by null contact pressures (Fig. 6b, c), implying that the corresponding TAV frames were subject to PVL when implanted in the calcified aortic root. In those cases, the associated minimum gap area was equal to 2.5 and 7.0 mm² for the diseased I and II configurations, respectively, highlighting the presence of a mild aortic regurgitation grade (Tanaka et al. 2018).
Surrogate model validation The results of the preliminary analysis conducted to select an adequate combination of sampling strategy and surrogate models are reported in the Supplementary Materials. The combination of central composite design and Gaussian process surrogate model emerged as the most effective for the aims of this study, and all results presented in the following refer to this combination. The output of the validation process of the surrogate models, based on the leave-one-out principle, is summarized in Fig. 7, in which the pullout force magnitude, peak maximum principal stress, and peak contact pressure values predicted by the Gaussian process are plotted against the corresponding simulated values. The excellent agreement between predicted and simulated objective values (in the case of both healthy and diseased configurations) is confirmed by the strong direct proportionality of the data points, well aligned with the identity line, and by the very high values of the coefficient of determination (R²_pred > 0.94) (Table 4). Moreover, nearly all SCVR values of the predicted objectives lay within the required interval [−3, 3] (Pant et al. 2012) (Fig. 8), indicating the validity of the Gaussian process surrogate models.
Geometry parameters exploration The validated surrogate models were used to investigate the impact of each design parameter on the optimization objectives by varying two design parameters at a time while keeping the others fixed at their nominal values, as shown in Fig. 9. The string cross-section parameters (i.e., thickness and width) and shape 3, which was related to the overall radial dimension of the TAV frame (Fig. 3), had a major impact on the pullout force magnitude, in particular in the case of the diseased configurations, where high values of those parameters were associated with high pullout force magnitudes (Fig. 9a, upper and central panels). Conversely, shape 1, which was related to the total height of the TAV frame (Fig. 3), had a negligible impact on the pullout force magnitude (Fig. 9a, central panel).
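A leave-one-out validation of a Gaussian process surrogate, returning the predicted coefficient of determination R²_pred and the standardized cross-validated residuals (SCVR) used in the "Surrogate model validation" check above, can be sketched as below. This sketch uses scikit-learn in Python rather than the Matlab implementation adopted in the study, and the anisotropic RBF kernel is an assumption.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

def leave_one_out_validation(X, y):
    """Leave-one-out validation of a Gaussian-process surrogate.

    X : (n_samples, n_params) design matrix; y : (n_samples,) simulated objective values.
    Returns LOO predictions, predictive standard deviations, R^2_pred and SCVR values.
    """
    n = len(y)
    y_pred, y_std = np.empty(n), np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X.shape[1]))
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
        gp.fit(X[mask], y[mask])
        mean, std = gp.predict(X[i:i + 1], return_std=True)
        y_pred[i], y_std[i] = mean[0], std[0]
    r2_pred = 1.0 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)
    scvr = (y - y_pred) / y_std  # should lie within [-3, 3] for a consistent surrogate
    return y_pred, y_std, r2_pred, scvr
```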
Shapes 2 and 4, which were related to the upper and lower parts of the TAV frame (Fig. 3), respectively, had an impact on the pullout force magnitude only for the diseased cases (Fig. 9a, bottom panel). In all aortic root configurations, the string width and shape 3 design parameters had a major impact on the peak maximum principal stress (with the highest parameter values associated with the highest peak maximum principal stress values, Fig. 9b, upper and central panels). Conversely, the string thickness and shape 1 design parameters had a marginal impact on the peak maximum principal stress values (Fig. 9b, upper and central panels). As for shapes 2 and 4, those parameters had some impact on the peak maximum principal stress only in the case of the healthy and diseased I configurations (Fig. 9b, bottom panel). Thickness, width, shape 1, and shape 4 had a considerable impact on the peak contact pressure in the LVOT of both the healthy and diseased I aortic root configurations (Fig. 9c). Shape 3 had some impact only on the healthy configuration (Fig. 9c, central panel), while shape 2 did not markedly influence the healthy and diseased I configurations (Fig. 9c, bottom panel). None of the design parameters influenced the peak contact pressure of the diseased II configuration, which was always equal to zero independent of the design parameter combination (Fig. 9c), highlighting that in this case PVL could not be avoided by combining just two of the six design parameters of interest.
Choice of the optimal TAV frame geometry The optimized TAV frame geometries and the corresponding string cross-section values, obtained using the conservative constraint-based approach, are presented in Fig. 10, overlaid on the nominal geometry, for the healthy (Fig. 10a, optimized 1), diseased I (Fig. 10b, optimized 2), and diseased II (Fig. 10c, optimized 3) configurations, and considering the two diseased configurations simultaneously (Fig. 10d, optimized 4). The objective values of the optimized TAV frame geometries are summarized in Table 5 and compared to those of the nominal geometry. In the case of the healthy configuration (i.e., the optimized 1 TAV frame geometry), no feasible solution was generated, although a beneficial 137% increase of the pullout force magnitude and a beneficial 25% decrease of the peak contact pressure were obtained with respect to the nominal geometry, while maintaining the peak maximum principal stress within the aortic root material limit (i.e., < 2.5 MPa (Wang et al. 2015)). Conversely, in the case of the diseased configurations, the optimized 2 and optimized 3 candidates yielded objective values within the feasible solution space. In detail, a beneficial outcome in terms of peak maximum principal stress and peak contact pressure was obtained, with pullout force magnitudes lower than those of the nominal geometry but still conservatively higher than the minimum acceptable value of 6.5 N (Mummert et al. 2013; McGee et al. 2019b). Moreover, differently from the corresponding values characterizing the nominal geometry, peak contact pressure values within the feasible solution space (i.e., > 0 MPa and < 0.43 MPa) were obtained. Considering the optimized 4 TAV frame geometry (i.e., the one associated with the two diseased aortic root configurations simultaneously), all the objective values fell within the feasible space, although with a reduced margin of safety with respect to the optimized 2 and optimized 3 geometries.
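The parameter exploration of Fig. 9 described above, in which two design parameters are varied at a time while the remaining four are held at their nominal values, amounts to evaluating a surrogate on a two-dimensional grid; a generic sketch is given below, where the names, grid resolution, and the commented example values are illustrative assumptions.

```python
import numpy as np

def two_parameter_sweep(surrogate, nominal_x, idx_a, idx_b, range_a, range_b, n=25):
    """Predict one objective over a grid of two design parameters, holding the
    other parameters at their nominal values (the exploration shown in Fig. 9)."""
    grid_a = np.linspace(range_a[0], range_a[1], n)
    grid_b = np.linspace(range_b[0], range_b[1], n)
    values = np.empty((n, n))
    for i, a in enumerate(grid_a):
        for j, b in enumerate(grid_b):
            x = np.array(nominal_x, dtype=float)
            x[idx_a], x[idx_b] = a, b
            values[i, j] = surrogate(x)
    return grid_a, grid_b, values

# Example: sweep string thickness (index 4) and width (index 5) around an assumed
# nominal design [sf1, sf2, sf3, sf4, t, w] = [0.0, 0.0, 0.0, 0.0, 0.25, 0.45]:
# ta, tb, v = two_parameter_sweep(predict_pullout_force, [0, 0, 0, 0, 0.25, 0.45],
#                                 4, 5, (0.15, 0.35), (0.35, 0.55))
```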
The sets of non-dominated optimal solutions lying on the Pareto front of the three objectives (pullout force magnitude, peak contact pressure, and peak maximum principal stress) are presented in the plane of pullout force magnitude vs. peak maximum principal stress in Fig. 11. All the Pareto-optimal solutions of the healthy aortic root configuration were found to be located outside of the feasible solution space (Fig. 11a). Conversely, acceptable Pareto-optimal solutions were present in the case of the diseased configurations (Fig. 11b, c).
Fig. 7 Leave-one-out predicted values of the a pullout force magnitude, b peak maximum principal stress, and c peak contact pressure as a function of the corresponding simulated values, for the "healthy", "diseased I", and "diseased II" aortic root configurations.
However, differently from the diseased I configuration, a remarkable number of points of the diseased II configuration falling within the admissible region of peak maximum principal stress and pullout force magnitude exhibited unfeasible contact pressure values (Fig. 11). The output of the consistency check of the Pareto front is presented in Fig. 12, which shows the predicted vs. simulated objective values for the selected Pareto-optimal solutions for each aortic root configuration. The three solutions were selected within the feasible space for the diseased configurations, whereas they lay outside the feasible space for the healthy configuration. The consistency between simulated and predicted values was observable, with a direct proportionality of the data points, well aligned with the identity line, in particular in the case of the healthy and diseased I aortic root configurations for all the objectives, and of the diseased II configuration for the pullout force magnitude and peak maximum principal stress.
Discussion The application of computational modeling tools to the design process of endovascular devices, in particular in the initial proof-of-concept and prototyping phases, has been gaining rapidly growing interest from the medical device industry (Morrison et al. 2017, 2018). Computational simulations can facilitate the design, optimization, and development phases of medical devices (Morrison et al. 2017), reducing the number of prototypes to be manufactured and the number of experimental tests, with a positive impact on the product development cycle time and costs.
Fig. 8 SCVR values of the a pullout force magnitude, b peak maximum principal stress, and c peak contact pressure as a function of the corresponding simulated values, for the "healthy", "diseased I", and "diseased II" aortic root configurations.
Within this context, the present work proposes a computational framework for the shape and cross-sectional size optimization of TAV frames based on FE analysis of TAV implantation in idealized aortic root models. Several approaches have been proposed in which computer models were applied to the shape optimization of endovascular devices. While the majority of these works focused on coronary and peripheral stents (Li et al. 2009; Wu et al. 2010; Pant et al. 2011, 2012; Azaouzi et al. 2013; Li and Wang 2013; Clune et al. 2014; Tammareddi et al. 2016; Alaimo et al. 2017), little attention has been paid to TAVs until now.
Technical characteristics and novelty items of the proposed optimization framework In a recent study, TAV shape optimization was focused on the inflow portion of the valve frame, considering two specific geometrical parameters of that region, i.e., the valve diameter at 4 mm above the ventricular inflow section and the height of the first row of the frame cells at the ventricular inflow (Rocatello et al. 2019). In contrast, here six parameters associated with the overall TAV frame geometry were considered for design optimization purposes, thus providing a comprehensive picture of the overall impact of frame geometric attributes on the procedural effectiveness of TAV implantation.
Fig. 9 Predicted values of the a pullout force magnitude, b peak maximum principal stress, and c peak contact pressure, obtained by varying two design parameters at a time (for the most relevant combinations) while keeping the others fixed at their nominal values, for the "healthy", "diseased I", and "diseased II" aortic root configurations.
Furthermore, as a novelty of this study, implicit FE analysis was implemented in the optimization framework, taking advantage of its being unconditionally stable and of its not requiring a small time step size or mass scaling to guarantee an accurate and stable solution. To perform the number of simulations required by the proposed optimization framework (n = 77 for each aortic root configuration), a computationally efficient FE model of TAV implantation was defined, adopting 1D and 2D elements for the TAV frame and the aortic root and appropriate model simplifications in addition to the implicit FE solver. The run time of each sample simulation was ~40, ~75, and ~80 min on 6 computing cores of a local workstation for the healthy, diseased I, and diseased II aortic root configurations, respectively. Ideally, all simulations required by the central composite design sampling strategy could be performed simultaneously on a large computing cluster, obtaining the optimization results in less than 2 h. Computational efficiency was also demonstrated when comparing the run time to that of the previous TAV frame optimization study (Rocatello et al. 2019), where the average run time for each sample simulation was 53 min on a cluster equipped with 16 computing cores (4.0 GHz and 63.0 GB RAM for each node).
Consistency check of the optimization framework The simulations of the impact of TAV implantation agreed satisfactorily, in terms of optimization objectives, with data reported in the literature. In particular, a marked dependency of the results on the presence of calcium deposits was observed, in accordance with previous findings (Sturla et al. 2016). The observed dependence of the obtained pullout force magnitude values on the aortic configuration agreed with previous findings (Wang et al. 2012, 2015) reporting values considerably higher in diseased aortic root configurations than in the healthy one (Table 5). The high pullout force magnitude values observed here in the presence of calcium deposits were ascribable to the combined effect of the adopted value of the friction coefficient between the TAV frame and the calcium deposits, and of the normal contact forces related to the reduced TAV frame expansion (Fig. 5a).
Fig. 10 Optimized TAV frame geometries, overlaid on the nominal geometry (in gray), and the corresponding string cross-section values for the a "healthy" ("optimized 1"), b "diseased I" ("optimized 2"), and c "diseased II" ("optimized 3") configurations, and for d both "diseased" configurations simultaneously ("optimized 4").
Fig. 11 Pareto-optimal solutions of the three objectives for the a "healthy", b "diseased I", and c "diseased II" aortic root configurations. Points are represented in the plane of pullout force magnitude vs. peak maximum principal stress and are colored according to the values of peak contact pressure. Constraints of the objectives are indicated as dotted lines, with the feasible region of pullout force magnitude and peak maximum principal stress shown as a transparent gray region.
Indeed, previous studies suggested that calcium deposits help the anchoring of the TAV frame (McGee et al. 2019b) and that additional oversizing of the device should be considered to avoid migration issues in the absence of calcifications (Mummert et al. 2013). In addition, the observed peak maximum principal stress values were higher in the case of the diseased configurations than in the healthy one (Table 5). The explanation for this lies in the presence of the calcium deposits, which are pushed by the TAV frame against the aortic root wall (Fig. 5b). Moreover, the peak maximum principal stress values depended on both the shape and the position of the calcium deposits. Overall, the reported peak maximum principal stress values were similar to those reported in previous FE studies of TAV implantation (Wang et al. 2015; McGee et al. 2019a). Peak contact pressure values in the diseased configurations were lower than those in the healthy one (Table 5, Fig. 5c). This was attributable to a dependence on the calcium pattern, as well as to the fact that the contact forces were mainly exerted between the TAV frame and the calcium deposits instead of the aortic root wall (Fig. 5a). Indeed, the zero peak contact pressure reported for the diseased II configuration was related to the absence of contact between the TAV frame and the LVOT (Fig. 5c), indicating the possible occurrence of PVL.
Analysis of surrogate models After a preliminary analysis that compared the coupling of different sampling strategies and surrogate models (see Supplementary Materials), the combination of central composite design with the Gaussian process surrogate model was successfully used, enabling the definition of an approximate relationship between the optimization objectives and the TAV design parameters. In this regard, a first analysis conducted by varying two design parameters at a time while keeping the others fixed at their nominal values (Fig. 9) made it possible to clarify the impact of the design parameters on the predicted objective values and, consequently, to make decisions on the TAV frame design based on the predicted mechanical behavior. However, this approach involved only a limited search of the design space and parameter exploration. Hence, all parameter combinations within the design space were also investigated using two alternative approaches to identify optimal candidates starting from the nominal geometry of the TAV frame, namely (1) a conservative constraint-based approach and (2) an approach based on Pareto optimality. The two approaches were applied to all the aortic root configurations. In particular, the healthy configuration was included in the analysis as it represents the extreme configuration without calcification, useful in the perspective that TAV replacement could become an important treatment option also for low-risk patients (Howard et al. 2019), presenting with very low calcium deposit volumes.
The conservative constraint-based approach successfully led to one optimized TAV frame shape for each specific aortic root configuration (i.e., the optimized 1, 2, and 3 geometries) and one optimized shape for both diseased configurations (i.e., the optimized 4 geometry) (Table 5, Fig. 10). Based on this approach, no feasible solution emerged for the healthy configuration (Table 5), suggesting the occurrence of possible prosthesis-patient mismatch in the absence of calcifications and indicating the need to further extend the initial design space to account for additional device oversizing. Conversely, the other optimized geometries, based on the diseased configurations, were identified within the feasible solution space (Table 5), revealing the ability of our approach to obtain optimal candidates within the design space, fitted to specific or grouped diseased anatomies. The approach based on Pareto optimality led to a set of optimal TAV frame geometry candidates for each aortic root configuration (Fig. 11).
Fig. 12 Predicted values of a pullout force magnitude, b peak maximum principal stress, and c peak contact pressure, as a function of the corresponding simulated values, for three Pareto-optimal solutions of each aortic root configuration (i.e., the "healthy", "diseased I", and "diseased II" configurations). Constraints of the objectives are indicated as dotted lines, illustrating the feasible solution space as a transparent gray region.
Thereafter, design candidates could be selected based on additional features related to, e.g., hemodynamics, manufacturing-related aspects, the interface with the sutured prosthetic valve, and costs, ultimately ensuring high flexibility over the device design process.
Limitations and future perspectives This study presents some limitations that might weaken the effectiveness of the proposed optimization procedure. Idealized FE models of the aortic root were considered. In particular, the heterogeneous composition and thickness of the arterial wall and native leaflets were neglected. The shape of the calcium deposits was assumed to be an arc-shaped structure with a homogeneous, isotropic material. Moreover, further investigations could be conducted to improve the efficiency and robustness of the optimization process in terms of computational time and accuracy (Giselle Fernández-Godino et al. 2019), evaluating the use of more advanced optimization algorithms. Despite the limitations, the optimization framework proved to be effective in identifying appropriate TAV frame shapes, and several advances could be implemented to further extend its potential. In detail, aortic root models with different dimensions/shapes could be investigated to derive an optimized TAV frame geometry suitable for a range of anatomical sizes. Other anatomical features, such as the aortic root eccentricity or different calcification patterns, could be analyzed. Furthermore, the optimization framework could be used to improve other aspects related to the TAV mechanical performance, such as the Nitinol material parameters, the string thickness distribution, and the device implantation positioning. Different TAV frame designs could be investigated as well. Finally, FE simulations could be coupled with computational fluid dynamics simulations, following device implantation, in order to address TAV implantation procedural complications such as PVL (De Jaegere et al. 2016; Mao et al. 2018; Rocatello et al. 2019) and thrombosis (Bianchi et al. 2019; Nappi et al. 2020).
Conclusions In this work, a computational framework for the shape and cross-sectional size optimization of TAV frames was proposed. FE analyses of TAV deployment were performed in three different idealized aortic root models representing a healthy (i.e., without calcium) and two diseased (i.e., with calcium deposits) scenarios. Three biomechanical quantities (i.e., the pullout force magnitude, the peak maximum principal stress within the aortic wall, and the peak contact pressure in the LVOT) were defined as objectives of the optimization problem to evaluate the TAV frame mechanical performance. By defining a fixed design space and implementing surrogate models related to the optimization objectives, the geometrical parameters of the TAV frame were explored to improve its mechanical performance. Thereafter, optimized frame geometries were successfully identified, for both single anatomies and groups of anatomies, ultimately resulting in improved procedural outcomes and reduced time and costs associated with the iterative device development cycle. The optimization framework provided enough flexibility to be extended in further studies accounting for different aortic root anatomies, additional design parameters, and other TAV devices.
Code availability Commercial software (i.e., Matlab, HyperMesh, and Abaqus) was used to perform all FE analyses and post-process the results.
Declarations Conflict of interest The authors declare that they have no conflict of interest.
Replication of results The necessary information for replication of the results, including the geometric, material, and meshing data of the models, and the simulation settings, is presented in this paper. The interested reader may contact the corresponding author for further implementation details.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Relations Between Task Design and Students' Utilization of GeoGebra
This study contributes insights into how task design with different elements of guidance may influence students' utilization of dynamic software for problem solving and reasoning. It compared students' solving of two tasks with different designs supported by the dynamic software GeoGebra. The data analysed examined students' approaches to utilizing GeoGebra, the characteristics of their reasoning and their ability to prove the validity of their solutions after solving the problems. The results showed that students who solved the task with less guidance (without instructions about a specific solving method) were better able to utilize GeoGebra's potential to support their reasoning and problem solving. These students reasoned more creatively and presented more advanced proofs for their solutions than the more guided ones. potential of the software. For example, will the design of a given task influence students' ways of using the software? Non-routine tasks, for which students do not have a ready method and are therefore invited to construct (at least parts of) their own method, are beneficial for learning (Brousseau 1997; Jonsson et al. 2014; Kapur and Bielaczyc 2012), as is the requirement that students determine their own way through the solution (Stein et al. 2008). Other studies into the kind of guidance students need to solve non-routine tasks through creating means for solving have included providing hints for getting started, suggesting solving strategies, offering instructions for creating appropriate representations, giving information about important concepts, templates, etc. (Kapur and Bielaczyc 2012). However, some researchers doubt the value of those kinds of support, arguing that, at best, the students will understand the mathematics embedded in the guidance and, at worst, they will just follow the guidance and reach the solution without gaining much (if any) mathematical understanding (Hmelo-Silver et al. 2007; Kapur and Bielaczyc 2012). Despite research into the potential of dynamic software to support problem solving and reasoning, it is not yet clear how task design may affect students' utilization of the software. In our previous studies (Granberg and Olsson 2015; Granberg 2016; Olsson 2018), we designed tasks to encourage the use of GeoGebra features to support reasoning and problem solving. In these studies, in which the users were GeoGebra novices and the teacher interactions were limited merely to providing instructions for how to use it, we found that guiding students to use general formulas and strategies did not prevent them from engaging in productive reasoning. However, further investigations are needed to clarify whether it is the task design or GeoGebra itself that encourages student engagement in reasoning and problem solving. Therefore, for this study, we designed a task with more specific guidance to compare students' use of GeoGebra between the previous task and the more guided task. The given tasks require the construction of a mathematical rule and the research question is: "Which different methods, if any, do students use to draw on the potential of GeoGebra to support problem solving and reasoning when working with non-routine tasks with different degrees of guidance?"
Background The key foci of the research question are GeoGebra's potential to support problem solving and reasoning, and the influence of task design.
However, there are at least 20 years of research on different geometric software (e.g. Cabri and Sketchpad) relevant to this study. Therefore, this section begins with a description of earlier research into the potential of dynamic software to support problem solving and reasoning, and concludes with an overview of research into learning about mathematical functions with the use of GeoGebra. Dynamic Software Supporting Problem Solving and Reasoning To construct solving methods and engage in reasoning, students often need support from concrete representations, such as drawings and figures (Preiner 2008). If the problem at hand concerns linear functions, students might need to draw and manipulate graphs to explore, for example, the relations between the graphical and algebraic representations of functions. Students might also need tools to investigate the mathematical properties of the visualized representations (Leung 2011). Actively exploring and investigating the properties of linear functions with graphs and illustrations could be easier to do with dynamic software than with pen and paper. Dynamic software has been shown to support students' problem solving by allowing them to construct graphs simply by entering an algebraic formula (Sedig and Sumner 2006). GeoGebra can also transfer changes from one representation to another. Anything altered in the algebraic representation is automatically adjusted in the graph, and vice versa. Once entered, an object can be precisely adjusted in both representations simultaneously (Preiner 2008). Other system tools such as zoom in and out, construction protocols, angle measurements, and hide and reveal representations support students' problem solving by inviting them to explore and investigate various mathematical properties, processes and relationships (Berger 2011). Dynamic software such as GeoGebra also creates opportunities for students to consider the mathematics behind the construction of graphs and algebraic expressions (Mariotti 2000), which they may use to justify claims and construct solutions. The ease of creating and adjusting multiple representations supports informal reasoning as an alternative to, or as a way to develop, more formal logical reasoning (Barwise and Etchemendy 1998; Jones 2000). Questions like "How can we increase the slope of graph x?" can be explored by constructing, manipulating and comparing several graphs using dynamic software, instead of interpreting just one or two graphs drawn on paper. If these attempts to solve the problem are supported by a prediction of the outcome, they may enhance students' reasoning and justification of their solutions (Hollebrands 2007; Olsson 2018). Designing Tasks for Problem Solving and Reasoning Tasks in mathematics education are generally intended to enhance specific mathematical skills and competencies. However, research frequently notes that textbook tasks often focus on results and answers, which in turn leads to emphasizing procedural skills rather than competencies like problem solving and reasoning (Hiebert and Grouws 2007; Jonassen 2000). Even with tasks explicitly categorized as 'problems' or 'non-routine', students are often provided with guidelines for constructing appropriate representations of the problem: that is, students do not need to consider which representations are appropriate to support the solution. Instead, they may directly implement the provided instructions to solve the given task (Jonassen 2000).
Nonroutine tasks aimed at engaging students in problem solving are also often underpinned with scaffolding, feedback and instructions to secure performance success (Kapur and Bielaczyc 2012). This may affect the way students engage in reasoning. The use of technology may demand new prerequisites for designing tasks aimed at problem solving and reasoning. The U.S. National Council of Teachers of Mathematics (NCTM 2009) suggests that using technology to link multiple representations of mathematical objects dynamically provides students with opportunities to engage in mathematical sense-making. Such tasks should engage students in problem solving (Santos-Trigo et al. 2016) and invite students to visualize objects and explore dynamic variations, by investigating, for example, the areas of triangles with different shapes and a given perimeter. Leung (2011) argues that for students to benefit from dynamic software, the given tasks should invite them to explore, reconstruct and explain mathematical concepts and relations. He presents a design model for mathematical tasks involving technology. Three modes are introduced: Practice Mode (PM), Critical Discernment Mode (CDM) and Situated Discourse Mode (SDM). These modes mirror the development from learning how to use a tool to gradually realizing the knowledge potential that is embedded in it. Tasks aimed at mathematical exploration may be designed based on these modes in order to encourage interaction between learners and a learning environment including technology. When approaching a task engaging in PM, the learner needs to establish practices for utilizing the tools for exploration. A scheme for how to use certain tools systematically to achieve a certain purpose will be developed. PM can involve either constructing mathematical objects or manipulating pre-designed objects. In this mode, the learner will develop skills and behaviour for interacting with the tool. The next mode, CDM, builds on PM and arises when learners need to make a critical judgement regarding how to implement the tools in order to observe patterns and invariants. In this mode, the empirical experiences from PM are mathematized: CDM is a precursor to SDM, which serves as a connection to theoretical reasoning and a bridge to formal mathematics. These three modes may be understood as a nested sequence, in the sense that CDM is a cognitive extension of PM, and SDM a cognitive extension of CDM. To support the different modes when designing tasks, Leung suggests that they should involve conjecturing and providing explanations. Practices will evolve into discernment, while discernment brings about reasoning. Learning About Functions Using GeoGebra To teach the concept of function, lessons must include opportunities to represent, discuss and interpret features of, language involved with and information derived from functions (Carlson 1998). The nature of the tasks, therefore, has a great impact on students' learning about functions. Oehrtman et al. (2008) argue that a strong emphasis on procedural ability is not effective in developing a deep understanding of the notion of function. Instead, the teaching must start in and develop from students' earlier experiences, with the formal definition of functions arising as a conclusion from the learning activity (Vinner and Dreyfus 1989). GeoGebra's features allow different approaches to investigating linear functions. 
Not only can it draw a submitted algebraic expression as a graph (Hohenwarter and Jones 2007), one that can be changed by manipulating the expression, it can also derive an algebraic representation from a graph created in the drawing section. The user may then drag and change the graphic representation and observe changes in the algebraic representation (Karadag and McDougall 2011). The x-coefficient and the constant term may also be connected to sliders through which the user can easily change the appearance of the graph and explore its relation to the particular algebraic expression. Tasks may be designed to use these features of GeoGebra to encourage students to examine relations between, and properties of, representations of linear functions. Gómez-Chacón and Kuzniak (2015) found that students often had difficulties using software tools, which prevented them from constructing the necessary features or links to solve mathematical tasks. In most cases, students needed teacher support to overcome those challenges. If such teacher support does not reveal a solution method, the pedagogical use of software tools can become part of the teaching of mathematical concepts (Leung 2011). Therefore, the design should generate tasks in which software tools are likely to be used. In our previous studies, which used Task 1 in this study (see Fig. 1, below), we found it was possible to instruct students in how to manage GeoGebra without revealing how to solve the task. Short instructions and the availability of the researcher during student activity in relation to the task were sufficient for students to use it to construct solutions to non-routine tasks. The results showed that students engaged in deep reasoning that was supported by GeoGebra. Framework The research question guiding this study asked in which ways students' utilization of GeoGebra was different when solving non-routine tasks with different levels of guidance. It was considered that to support an analysis aiming to answer this research question, a specification of this software's potential to support problem solving and reasoning was required. Furthermore, using it should be interpretable in students' uttered reasoning, and therefore this latter's character is important to explain differences in its utilization. To deepen the analysis, the aspect of proof was considered. Signs of proof were drawn from students' uttered reasoning as were supported by GeoGebra features. Therefore, to examine the students' utilization of GeoGebra, a conceptual framework was constructed by using the following components: 1) a specification of the potential of GeoGebra to support problem solving and reasoning; 2) definitions of imitative and creative reasoning, which have been found to lead to different types of learning and to be enhanced differently by the character of guidance in tasks; 3) categorisation of different types of proof expressed in reasoning, which may be used to explore the variety of proof in the context of the study. In the context of this study, and with respect to the participants' ages, proof is not related to strict logical proofs. Instead, proof is regarded as students' ability to convince themselves and others of the truth in their solutions. Balacheff's (1988) distinction between pragmatic and intellectual proofs will be used to analyse the diversity of students' proofs for their solutions and methods related to different task design. 
Points 1 to 3 (above) form the conceptual basis of the analytic method and will be outlined in the following sub-sections. As this study is following up on an earlier quantitative study, in which students who had worked on Task 1 in a post-test significantly outperformed students who had worked on Task 2, both tasks were used again here without change. The Potential of GeoGebra to Support Problem Solving and Reasoning Our previous studies have shown students often need to refer to and compare mathematical representations to engage in reasoning and to be successful problem-solvers (Granberg and Olsson 2015; Olsson 2018). GeoGebra has the potential to provide students with dynamic, algebraic and graphical representations of mathematical objects. It has been found that students use GeoGebra tools to explore and investigate these objects' relations and properties (Preiner 2008; Bu and Schoen 2011; Santos-Trigo et al. 2016). Based on our experiences and the literature, in the context of this study (presented below in the "Method" section), GeoGebra's potential to support problem solving and reasoning is specified in terms of: (a) offering a quick and exact, dynamically linked transformation and display of representations of mathematical objects that could be utilized to explore relations between the representations; and (b) offering tools (e.g. to measure, read, organize, and step back and forth through a task-solving session) that could be drawn upon to investigate specific mathematical properties of the visualized representations. Student use of the potential of GeoGebra was investigated by having them solve tasks using two different designs (see Figs. 1 and 2). Their utilization of it was analysed through interpreting the levels of reasoning observed in their computer actions and conversations. The conceptual basis used for reasoning is outlined in the next sub-section. Imitative and Creative Reasoning In the mathematics education literature, reasoning is equated with strict proof by some authors, while others define it less narrowly (Balacheff 1988; Ball and Bass 2003). Students in secondary school are introduced to mathematical proofs, but, in everyday classes, they more often perform less strict forms of reasoning. Acceptable reasoning is fostered by the expectations and demands of teachers, textbooks and curricula (Yackel and Hanna 2003). Lithner (2008) describes reasoning as a line of thought adopted to produce assertions and conclusions in order to solve tasks. Inspired by Pólya's (1954) concept of 'plausible reasoning' (distinguishing a more reasonable guess from a less reasonable one), Lithner defined Creative Mathematically-founded Reasoning (CMR) as coming up with original methods for solving mathematical problems, supported by arguments anchored in intrinsic mathematics for prediction and verification. CMR is not necessarily logically strict, but it is constructive through its support for plausible arguments. Reasoning that is not based on plausible arguments can be imitative, based mainly on given or recalled facts and procedures. Different kinds of imitative reasoning are often suitable for solving routine tasks. Memorized reasoning (MR) relies on recalling an answer (e.g. every step of a proof or facts such as 1 m³ = 1000 dm³), but more often tasks require procedural calculations (e.g. "draw the graph corresponding to y = 3x − 3"). In those cases, algorithmic reasoning (AR), recalling a set procedure that will solve the task, is usually more suitable.
One variant of AR particularly relevant to this study is Guided AR, reasoning based on a given procedure supplied by a person or a text. Reasoning can be seen as a thinking process and/or the product of thinking processes, and it is the latter that we can observe as data (e.g. written solutions, oral speech and computer actions), which may be used to examine the type or level of their reasoning while solving problems. The design of this study (which will be presented in detail in the "Method" section) builds on students solving tasks in pairs with the support of GeoGebra, and their reasoning being visible in their conversations and activity using GeoGebra. Students' reasoning will be characterized through Lithner's framework of imitative and creative reasoning. Four Types of Proof Students may solve tasks equally successfully, but with a variety of types of proof supporting the solution. Balacheff (1988) proposed using qualities of students' proofs for their solutions and methods to explore the variety of proof through solving a task. In this study, his framework of pragmatic and intellectual proofs, as outlined in the next sub-section, was used to characterize students' expressed proofs. He distinguishes students' forms of proof in mathematical practices from strict logical proofs. Student proofs for their solutions are often connected to their own activity generated while solving the problem. A further difference in student justifications is between pragmatic proofs and intellectual proofs. Pragmatic proofs are directly linked to the students' task-solving actions and are a matter of showing that their method or conjecture works. Intellectual proofs, on the other hand, rest on formulations of, and relations between, properties; they are concerned with giving reasons for its truth. Pragmatic proofs can be categorized as either naïve empiricism or the crucial experiment, and intellectual proofs as the generic example or the thought experiment. These four categories are presented in detail below. Naïve Empiricism This is the very first step in the process of generalization. It makes assertions about the truth of a conjecture based on specific examples. For instance, for the question "Is the sum of two odd numbers always even?", a student performs a number of additions (e.g. 1 + 1, 3 + 3, 1 + 7, 5 + 11, etc.) and, based on this data alone (each result being specifically even), claims that all sums of two odd numbers are even. The Crucial Experiment Originally, the expression referred to an experiment designed to prove whether one or another of two hypotheses was valid. Balacheff used the term to represent the assertion of truth based on solving one additional instance of the task, and assuming and asserting that if the conjecture worked for this instance, it would consequently always work for similar instances. For example, a student has the same question as in the example above. Having explored several additions of two odd numbers, with all being even, she states that it is not feasible to try all possible additions. She then decides to try an addition with larger numbers, e.g. 123 + 235, and, based on this specific result (358) being even, claims that all sums of two odd numbers are even. The difference between this and naïve empiricism is that here the student explicitly poses the problem of generalization.
The Generic Example It is a characteristic representative of a class of objects which allows for treating a particular instance without losing generality and the student has some awareness of the generality of the specific procedure being used. The difference from the crucial experiment is that this type of proof considers the properties of the entire class and their relations. So, if she looks at 5 + 7 and says, 5 + 7 = (4 + 1) + (6 + 1) = (4 + 6) + (1 + 1) = 10 + 2, which is the sum of two even numbers (and the second one will always be 2) and any other example will work exactly the same way. (This relies on the sum of any two even numbers being even.) The thought experiment Justification by thought experiment is further detached from a particular representation than the generic example. For instance (same example as above), a student may argue that any odd number is one more than an even number, so adding any two odd numbers is adding two even numbers (which are always even) and then adding 2 to it, which will still keep it even. (This might be seen as a verbal version of (2 m + 1) + (2n + 1) = 2 m + 2n + 2 = 2(m + n + 1).) Design of the Tasks Lithner (2017) proposes that a task engaging students in CMR must: (1) be reasonable for the particular student to construct a solution (creative challenge) and (2) be reasonable with respect to the student's mathematical resources for articulating arguments for the solution method and the solution (justification and conceptual challenge). These principles build on the ideas of Brousseau (1997), that mathematical learning takes place when students have the responsibility to construct (at least parts of) solutions to non-routine tasks for which they do not have a solution method in advance. When including software such as GeoGebra, students may need guidance with respect to earlier experiences of using the software (Gómez-Chacón and Kuzniak 2015). Task 1 (see Fig. 1) in this study was based on these principles. It was used in our earlier studies (Granberg 2016;Olsson 2018;Olsson and Granberg 2018) and was designed for users with limited experience with GeoGebra. It was shown that they could receive guidance with respect to how to enter algebraic expressions into GeoGebra and in how to use the angle-measuring tool combined with instructions to focus on m-values (the x-coefficient) and test the solution, but still have to construct parts of the solution themselves. Task 2 (see Fig. 2) was originally designed for a quantitative study comparing learning outcomes of Tasks 1 and 2 with respect to the characteristics of reasoning (Olsson and Granberg 2018). It was designed based on Lithner's (2017) principle for AR tasks, providing a task-solution method. Compared with Task 1, it offers far more specific guidance, including which functions to enter into GeoGebra, as well as instructions to look for perpendicular pairs of graphs and to focus on the product of gradients m 1 and m 2 . Task 2 was designed to encourage the Guided AR strategy of following written or oral guidance (Lithner 2008;Palm et al. 2011). In a previous study, tasks designed using the same principles were found to engage students in AR (Jonsson et al. 2014). In summary, the students' use of GeoGebra and its identified potential to support reasoning and problem solving was used to categorize how students differed in their use of the software and types of their articulated proofs arising from Tasks 1 and 2. 
The specification of GeoGebra's potential to support problem solving and reasoning in the context of this study is based on our experience from earlier studies (Granberg and Olsson 2015; Olsson 2018). Lithner's (2008) framework of creative and imitative reasoning was used to describe students' ways of reasoning, while Balacheff's (1988) types of proof were used to examine the variety of proofs the students produced for their solutions. Method This study is qualitative and was designed as an experiment to compare students' utilization of the specified potential of GeoGebra when solving non-routine tasks with two different task designs with respect to the degree of guidance. The study was undertaken in collaboration with a project group which is investigating learning mathematics in relation to imitative and creative reasoning. Forty students in school years 7-9 from two different Swedish compulsory schools (one from a major city, one from the countryside) volunteered to participate. Sweden has a compulsory national curriculum and linear functions are introduced in year 9. The students from year 9 in this study had been taught briefly about aspects of linear functions (although not the conditions under which the graphs of two linear functions will be perpendicular), while those from years 7 and 8 had not. Students from both schools had experience of technological aids (including GeoGebra), but the major part of their mathematics lessons was conducted without the aid of technology. Written informed consent was obtained both from each student and from their parents, and all ethical requirements of the Swedish Research Council (SRC 2011) were met. First, students were randomly divided into two groups, one to solve Task 1 and the other to solve Task 2. Then they were paired randomly, but all were paired within the same year of school. After collecting data from six pairs of students each solving one of the tasks, a decision was taken to collect further data from eight pairs of students solving Task 2. The reason for this was that the solutions to Task 1 essentially did not deviate from the experiences of our earlier studies, while the analysis of the solutions of Task 2 would benefit from an extended sample. The Two Non-routine Tasks The aim of both tasks is to formulate a rule for the conditions under which the graphs of two linear functions will be perpendicular to one another. There are two main sub-tasks to engage in: constructing the necessary examples and identifying the relationship between the corresponding m-values that results in perpendicular graphs. For Task 1, it was considered that students could use the Cartesian table (how to define positions and how to read a graph) in combination with the dual visualization in GeoGebra of algebraic and corresponding graphical representations, in order to construct examples of linear functions with perpendicular graphs. These constructions would serve as references for exploring the circumstances under which m-values result in perpendicular graphs. Due to the students' limited experience with linear functions, the instructions guided them to use the general formula y = mx + c. In pilot studies preceding the earlier studies using Task 1 (Granberg 2016; Olsson 2018), it was shown that students with prior knowledge similar to that of this study's participants did not engage in the task unless they had concrete instructions for how to start. 
Furthermore, Task 1 included instructions guiding students to focus on the m-values and to test the rule by constructing three examples. This may have limited the students' utilization of GeoGebra's potential, for example to create and manipulate lines in the graphic field. On the other hand, creating a graph by submitting an algebraic expression makes the construction less transparent than creating it using the graphing tool. This may promote deeper reasoning for understanding the connection between algebraic and graphic representations (Sedig and Sumner 2006). Tasks involving exploring properties of both m- and c-values and concluding which of them affect the slope are highly relevant in this analysis of using GeoGebra. Experiences from earlier studies have shown that students of ages similar to those in this study did engage in such tasks even with these instructions, while pilot studies have shown that without these instructions students often engaged in fruitless trial-and-error strategies. It appeared that they were not able to explore systematically either the m- or c-values without guidance. Therefore, the instructions were kept in this study. Task 2 was originally designed for a quantitative study investigating the learning outcome compared with Task 1 (Olsson and Granberg 2018). The design aimed at removing the incitements for CMR, which are to construct an original solution and to formulate arguments for it. More precisely, students solving Task 2 in this study, if they were to follow the instructions, would be provided with the necessary examples and a table useful for identifying the circumstances under which relationships between m-values result in perpendicular graphs. Tasks similar to Task 2, with specific instructions and a prepared table, are common in Swedish textbooks. Solving tasks with guiding templates using AR means that students have less need to assess their methods. The arguments students use for their solutions to such tasks are often shallow or not even anchored in mathematics (Lithner 2008). However, if solvers of Task 2 were asked to explain their solution, they would have access to examples similar to those constructed by solvers of Task 1 and might draw on parts of the specified potential of GeoGebra. Procedure, Data Collection and Method of Analysis The study was performed in a separate room from the classroom and involved two students at a time. Having students work in pairs had been used in earlier research (see, for example, Schoenfeld 1985; Roschelle and Teasley 1995) and had been found suitable for collecting data about problem solving and reasoning. Each pair shared one laptop and the author gave a brief introduction to using GeoGebra (how to enter algebraic functions, how to adjust existing algebraic expressions, how to measure angles, etc.), as well as offering the students all of the technical support necessary for their use of the software while solving the task. The students could ask questions, but were not provided with any further guidelines for solving the task. If students doing Task 1 did not know how to proceed, they were encouraged to explain the way they were thinking; students undertaking Task 2 were encouraged to read the instructions. When students felt they had solved the task, they were asked, 'Are you sure the rule works?' and 'Can you explain why the rule works?' Their responses were used to examine the quality of their justifications, and students could use GeoGebra to support them. 
The students' conversations and screen actions were recorded using the software BB-flashback. The movie files included both the task-solving session itself and the students' answers to the concluding questions. During the task-solving session, the author took notes when students gestured during their conversations. The conversations were transcribed as written text, the computer actions were presented within square brackets, and gestures were noted within parentheses. The method of analysis builds on the components presented in the framework (specification of GeoGebra's potential, categorization of reasoning as CMR and AR, and categories of different types of proof). The tasks were designed in relation to principles intended to promote CMR (Task 1) or AR (Task 2). Based on earlier experience, it was expected that reasoning would be observed when students prepared inputs for GeoGebra, assessed outcomes and drew conclusions. Furthermore, students' reasoning in combination with using the software was regarded as a basis for analysis of the pertinence of GeoGebra's features. Therefore, in preparation for analysis, the transcripts were divided into sequences as shown below: sub-task → formulating input → transforming input to output (processed by GeoGebra) → using output → drawing conclusion. Both Task 1 and Task 2 required the students to solve several sub-tasks. In Task 2, there are explicit suggestions for sub-tasks, for instance, 'enter y = 2x + 2'. Task 1 does not provide such explicit instructions; instead, the students have to formulate sub-tasks themselves. In order to monitor whether or not the design of the tasks resulted in the expected student approaches, hypothetical paths for Task 1 and Task 2 were created showing probable ways through the sequences. Figure 3 depicts hypothetical paths through such a sequence for (a) Task 2 and (b) Task 1. The hypothetical path for a Task 2 sequence is to adapt the sub-task from the instructions (1a), enter the suggested input into GeoGebra (2a), then allow GeoGebra to process the input (3), then use the output as it is, merely copying it as a written or oral answer to the sub-task (4a) and, finally, drawing a conclusion (5). For Task 1, the hypothetical path starts with formulating a sub-task without guidance (1b; this can be supported by a question, hypothesis or a less-structured exploration, e.g. 'What happens if we change the x-coefficient?') and constructing and submitting an input complying with the formulated sub-task (2b). GeoGebra processes the input (3), the output is interpreted, assessed or transformed (4b), and a conclusion is drawn (5). Since GeoGebra's potential to support reasoning was available to all students, it was of particular interest to examine possible deviations from the hypothetical path of Task 2. The design of Task 2 provides students with opportunities to follow instructions rather than creating solution methods, which is consistent with AR. Furthermore, it provides hints as to what is important, which removes the incitement to assess the outcome. However, students could neglect the proposed input actions and instead construct their own from the main question of the task (path 1a → 2b → 3 → 4b → 5). They could then follow the path for Task 1. Another possibility is that, after entering the proposed input object, they could interpret, assess or transform the output object (path 1a → 2a → 3 → 4b → 5) rather than just transfer it as a solution. 
Both these approaches entail constructing (parts of) the solution and/or assessing the outcome, which is consistent with CMR. Not all sequences necessarily include all stages. For instance, solvers of both tasks may notice that it is possible to use the existing GeoGebra output (3) to solve a sub-task (1a or 1b → 3 → 4a or 4b → 5) or they may solve a sub-task without using the software (1a or 1b → processing without GeoGebra → 5). The analysis was conducted through the following steps: 1) to structure the data, students' task-solving was categorized as following the hypothetical path for either Task 1 or Task 2; following the hypothetical path was regarded as reflecting that the intention of either task design was achieved; 2) within these categories, students' task-solving activity was identified and categorized as either using or not using the specified potential of GeoGebra; 3) each student's activity was examined to determine whether the reasoning used was AR or CMR; 4) the type of the students' proof of their solutions was classified in relation to Balacheff's categories. During the progression of the study, the method of analysis was recurrently discussed within the earlier mentioned project group. When the four steps above were set, extracts of transcripts were analysed individually on a trial basis by members of the project group. The group achieved early consensus on steps 1, 3 and 4. We had some initial disagreements about step 2, as some members considered the first specification of GeoGebra's potential to support problem solving and reasoning too general. Therefore, it was reformulated to connect more closely to the context of the study. The final analysis was conducted by the author. The first step was to examine whether the students who were given Task 1 followed or deviated from the hypothetical path for Task 1 and whether those solving Task 2 followed or deviated from the hypothetical path for Task 2. Then, to examine students' utilization of GeoGebra's potential, their task-solving activity was divided into sequences (see above). Such sequences typically began with preparing an input for GeoGebra and ended by drawing a conclusion from the software's output (Granberg and Olsson 2015). After structuring the data into sequences, the next step was to categorize the students' activity and determine how consistent the GeoGebra actions were with the specification of the utilization of GeoGebra's potential. The model in Fig. 3 was used to structure the data, because it was likely that there were substantial differences between the reasoning for, and the utilizations of, GeoGebra's potential in the task paths for the two tasks. Since the path for Task 1 required more of an inquiry-based approach than that for Task 2, it was possible to observe differences in whether and how students chose to use GeoGebra to explore and investigate mathematical properties by transforming and manipulating the mathematical objects. The reasoning part of the analysis focused on students' speech (suggestions, questions, answers, arguments, etc.) and actions (interaction with GeoGebra and gestures). Lithner's (2008) concepts of reasoning were used to establish whether their reasoning was AR, Guided AR or CMR. When students' utterances and actions were interpreted as merely following the instructions in Task 2, their reasoning was considered Guided AR. When students recalled a memorized procedure, they were categorized as using AR. 
Finally, students' reasoning was regarded as CMR if they created or re-created a solving strategy (not recalling a whole procedure or, for solvers of the guided task, not following the instructions step-by-step), if they presented arguments for why the strategy would work, did work or did not work, and if their arguments were anchored in intrinsic mathematical properties. The students' proofs for their solutions and methods were investigated at the sub-task level while they were solving the task and also when they believed they had formulated a rule, by asking them the question, 'Can you explain why the rule works?' All proofs were classified according to one of Balacheff's four types. A proof solely based on the assumption that several examples confirmed their conjecture was categorized as 'naïve empiricism' (Type 1). If the question of generality was raised and their statement was supported by an explicit exploration that verified whether it worked in the identified instance, the proof was categorized as a 'crucial experiment' (Type 2). If the exploration instead investigated the properties and relations of the representations of linear functions and was used to explain why something was true, it was categorized as a 'generic example' (Type 3). The 'thought experiment' (Type 4) was reached when students detached their proof from specific examples, and stated and argued for properties and relations relevant to the conditions under which the graphs of any two linear functions would be perpendicular. Analysis After dividing transcripts into the sequences of sub-task → formulating input → transforming input to output (processed by GeoGebra) → using output → drawing conclusion, the sequences were compared with the established hypothetical paths (see Fig. 3). Three categories were identified: solvers of Task 1 following the hypothetical path for Task 1; solvers of Task 2 following the hypothetical path for Task 2; and solvers of Task 2 deviating from the hypothetical path for Task 2. The last category contained three pairs of students. These pairs are interesting, as their utilization of GeoGebra, reasoning and justification did not differ from other students solving Task 2 while they were following its hypothetical path. But, after deviating from the path for Task 2, they were more similar to students solving Task 1. The analysis of the Task 2 pairs who deviated from the hypothetical path will be presented in a separate sub-section, focusing on the reasons for, and the consequences of, their deviation. The results of the analysis are summarized in Tables 1 and 2, which provide an overview, before more detailed presentations of each analytical step. Categorization of Students' Paths Through Solutions Structuring the data into sequences and comparing them with the hypothetical path for either task showed that students solving Task 1 essentially followed the hypothetical path for Task 1. However, solvers of Task 1 were also following the hypothetical path for Task 2 at the beginning. The following extract from the transcript of students O1 and O2 shows how they implemented the first steps towards solving the unguided task: 1. O1: Okay … let's start … just enter something … 2. O2: Well … y equals one x … and plus one will be great … This extract represents two sequences that contain parts of the hypothetical paths both for Task 1 and for Task 2. 
In turns 1-4, the students start by following the instructions for Task 1 and performing the sub-task of deciding what to enter; GeoGebra then transforms the input to a graph and the output is observed and found satisfactory. This sequence essentially follows the hypothetical path for Task 2: they follow instructions, but do not interpret the output to draw a conclusion. The choice to submit y = 1x + 1 is unguided, however. Also, turn 5 in the sequence shown in turns 5-10 is an instruction from Task 1 (draw another linear function perpendicular to the first one). They construct the input themselves. Turns 6-10 then follow the hypothetical path for Task 1, with assessment on turn 9 and a conclusion on turn 10. All five pairs who successfully solved Task 1 initially approached the task by following the first instructions. Turn 10 shows the students have an idea of how to proceed. The following turns show them continuing to look for the solution: 11. O1: We want the line to decrease when x increases. 12. O2: Then maybe y equals minus x plus one? 13. O1: Yes … let's try. [Types in y = −x + 1 and enters it] … yes. 14. O2: Yes … [measures the angle] … it is ninety degrees. Turns 10 and 11 connect the sequences. The conclusion in one sequence underpins the formulation of a sub-task for the next sequence. This is typical of the five pairs who successfully solved Task 1. Often, several sequences were associated with the same sub-task. For example, when students T1 and T2 worked to find linear functions resulting in perpendicular graphs, they agreed to enter y = −5x + 2 as a starting point. After investigating the intersection of y at 2 and x at 0.4, they concluded that the graph decreased five steps on the y-axis for every step on the x-axis, and that a perpendicular graph should 'do the opposite', increase by five steps on the x-axis for every unit step on the y-axis. That was elaborated into the idea that the linear function for the graph perpendicular to y = −5x + 2 must have an x-coefficient of 1/5. After stating this was true, they submitted y = 3x + 2 and y = −1/3x + 2, and y = −4x + 2 and y = 1/4x + 2. In this case, three sequences through the hypothetical path for Task 1 were associated with the same sub-task (find pairs of linear functions with perpendicular graphs). The work of students T1 and T2 and turns 11-14 from O1 and O2 follow sequences exemplifying the hypothetical path for Task 1. After following the first two instructions, all successful solvers of Task 1 created their own sub-tasks, formulated input, assessed, interpreted and transformed the output, and drew conclusions. Therefore, the pairs who successfully solved Task 1 essentially followed the hypothetical path for Task 1, even though they initially only followed instructions. Fourteen pairs solved Task 2, eleven of which essentially followed its hypothetical path. The following extract is from students A1 and A2: This sequence follows the hypothetical path for Task 2. The sub-task is adapted from the instructions, they use the suggested input, GeoGebra transforms the algebraic representation into a graph, and the students use the output directly, without further interpretation or assessment, to write the x-coefficient in the prepared table (see Fig. 2). This was repeated until all x-coefficients were written into the prepared table. Then A1 and A2 proceeded as follows: This sequence also follows the hypothetical path for Task 2. 
Although they discuss the result, they do not assess or interpret the result that the x-coefficients of 0.5 and −2 result in perpendicular graphs. The attention is rather on how to fill in the table correctly. After identifying all x-coefficients giving perpendicular graphs and writing them pairwise in the prepared table, they immediately started to multiply the m-values according to the instructions of Task 2 and formulated the following rule: 1. A1: The rule must be the x coefficients times each other … 2. A2: Must be minus one … 3. A1: Yeah … that's right. In this sequence the students did not use GeoGebra. Their conclusion was probably based on noticing that all multiplied examples in the filled-in table were −1. However, turns 1-3 do not indicate any assessment or interpretation of the result. A1 and A2 essentially follow the hypothetical path for Task 2 and are representative of all eleven pairs who did not deviate from the instructions. That is, they read the instructions, entered the suggested functions into GeoGebra, used the information from GeoGebra to find the perpendicular graphs and fill in the table, followed the instructions to multiply the x-coefficients, and drew the conclusion that the rule must be m1 × m2 = −1. In the following sub-section, those who solved Task 1 or Task 2 following the hypothetical path are considered separately. Utilization of GeoGebra's Potential There were significant differences in GeoGebra use between pairs who solved Task 1 or Task 2 and followed the appropriate hypothetical path. Students Solving Task 1 Following the Hypothetical Path for Task 1 Solvers of Task 1 used GeoGebra's quick and exact transformation and display of representations to investigate specific mathematical interrelations and properties of representations of linear functions. Turns 1-5 show S1's and S2's efforts to discover the values of the x-coefficient that result in a decreasing slope: 1. S2: Maybe y equal to minus two x will make it … 2. S1: We can try [writes and enters y = −2x − 1]. … No. … [erases y = −2x − 1] 3. S2: Maybe x must be less than minus two. 4. S1: We want y to decrease less while x increases more. 5. S2: Maybe y equal to minus x will work [writes and submits y = −x − 1]. … No. 6. S1: Let's try minus zero point five [writes and submits y = −0.5x − 1]. … Yeah. Turns 2, 5 and 6 show how S1 and S2 utilize GeoGebra's feature to transform the algebraic input to graphic output easily. In the next extract, they compare the x-coefficients of two examples: 1. S1: Well, if we go from y equals one x plus one to y equals two x plus three, the x-coefficient is doubled, and [if we go] from y equals minus one x minus one to y equals minus zero point five x plus three, the x-coefficient is divided by two. 2. S2: Then we can double the two and divide the minus point five by two. … Let's see. … y equals four x plus three and y equals minus zero point two five x plus three [writes and enters]. … Yeah. This shows the students using the displays of algebraic and graphic representations side by side to examine the properties of the x-coefficient and the relation between two x-coefficients when the graphs are perpendicular. They use the dual display to confirm that the solution on turn 2 results in perpendicular graphs. Characteristic solving patterns in Task 1 included constant use of the angle-measuring tool, hiding and revealing created representations, stepping back and forth through the solution, and adjusting existing representations. 
For example, T1 and T2 adjusted their existing representations when they thought it would be easier to relate the graphs to the axes if they removed the constant, and could thus place the intersection between the graphs at (0, 0). This example is discussed further in the sub-section headed 'Categorization of Reasoning'. Students Solving Task 2 Following the Hypothetical Path for Task 2 The eleven pairs following the hypothetical path for Task 2 typically used GeoGebra's potential only for the quick and exact transformation and display of representations and to recognize perpendicular graphs and connect them to the algebraic representations. Some used the angle-measuring tool, while others approximated which intersections were 90°. The following extract is from students H1 and H2: Turn 1 shows the use of quick and exact transformation. Turns 2-5 show the students focused on the display instead of on the instructions for recognizing the m-value. The display on the screen was also used to determine which graphs were perpendicular, as shown in the following extract: 1. H1: I think these two [pointing with the cursor at an intersection]. Wait [uses the measure tool and measures the angle as 90°]. Yes. None of the eleven pairs following the hypothetical path for Task 2 used GeoGebra's potential to explore properties of, and relations between, representations of linear functions. Categorization of Reasoning Students solving Task 1 essentially engaged in CMR; that is, they constructed the solution method and supported it with arguments. The eleven pairs solving Task 2 following the hypothetical path for Task 2 were considered to engage in Guided AR; that is, they used the instructions as a solution method and their arguments for their solutions were shallow or non-existent. Students Solving Task 1 Following the Hypothetical Path for Task 1 Typically, students solving Task 1 discussed what to submit into GeoGebra. The following extract is from students T1 and T2, who created two example pairs: y = x + 2 with y = −x + 2 and y = x − 2 with y = −x − 2: 1)]. 4. T2: That is increasing one y for five x. Wait, that is one divided by five, zero point two. Turns 1 and 2 show the students constructing part of the solution. They realize further references are needed to draw the conclusions necessary to formulate the rule. Turn 2 includes the statement that the constant is not necessary, which is supported by the argument on turn 3 that it does not affect the slope. Turn 3 includes a claim that the line should intersect at (5, 1), which is supported on turn 4 with the argument that the x-coefficient should equal 0.2, which would increase the line by one y for each five x. This is an example of CMR, including constructing (part of) the solution and supporting it with arguments based on mathematics. The five pairs who successfully solved the unguided task showed many examples of engaging in CMR. Students Solving Task 2 Following the Hypothetical Path for Task 2 The most frequent type of reasoning in students solving Task 2 was Guided AR; that is, they followed the instructions, submitted them into GeoGebra, and accepted the output as part of the solution without any supporting arguments (as in the example of students A1 and A2). Students E1 and E2 had the following discussion while searching for perpendicular graphs: Turns 1 and 2 include a claim about what is perpendicular, supported by another claim that the angle is 90°. This is not considered CMR; rather, it is shallow and does not support an original solution method. 
Turn 3 indicates that the purpose of the discussion on turns 1 and 2 is to proceed in line with the instructions for the task. This includes Guided AR, following instructions, and memorized reasoning, all typical for the ten pairs that reached a correct solution by following the hypothetical path for Task 2. Type of Proof There were substantial differences in justifications between students solving Task 1 or Task 2. Most (eleven out of fourteen pairs) solvers of Task 2 gave only Type 1 proofs for their solutions, while solvers of the unguided task usually gave Type 2 proofs. Answering the question 'Can you explain why the rule works?', solvers of the guided task still only provided Type 1 proofs, while three pairs of the unguided students provided Type 3 proofs and one pair gave Type 3 and partly Type 4 proofs. Students Solving Task 1 Following the Hypothetical Path for Task 1 Students who solved the unguided task frequently articulated proofs for their steps towards the solution at the second type, the crucial experiment (Type 2). The following extract from students R1 and R2 shows their reasoning about how to calculate m2 when m1 is known: 1. R1: I mean … look at these [pointing at y = 5x and y = −0.2x]. Point two times five is minus one … then minus one divided by five is minus zero point two. 2. R2: Yeah. If the first one is four, the other one must be minus one divided by four [types and enters y = 4x and y = −1/4x]. Yeah, now we know it's working. 3. R1: The rule must be … if one line is y equals mx plus c, the other must be y equals minus one divided by m plus c. Turns 1 and 2 indicate they also recognize the relationship between one step on the y-axis and the number of steps on the x-axis as a fraction. This indication is supported by their submission of the x-coefficient as −1/4 in turn 2. The claim in turn 1 is justified in turn 2 by the success of a single experiment, which may be an example of a proof of Type 2. But, at this moment, it is not clear whether the students have addressed the problem of generality. When asked 'Can you explain why the rule works?', students R1 and R2 return to their examples: 1. R1: Well … we had two examples and saw that if one graph increases more, the other must decrease less. 2. R2: Then we found that one m value could be calculated by dividing minus one by the other. 3. R1: That seems to decrease the other graph correctly … but we couldn't be sure… so we tried another example. This indicates that the example could be interpreted as either Type 1 or Type 2. But as they build their conclusion on a correct idea (turn 1) and a reasonable conjecture (turn 3), and explicitly express the problem of generalization (turn 3), it is regarded as being Type 2. Typically, when students were asked 'Can you explain why the rule works?', they could not answer directly. Instead, they returned to exploring the examples they had constructed while working with the task, and elaborated their proofs to Type 3, as exemplified by S1 and S2: 1. S1: If x is minus one, then y is here [pointing to (−1, 10)] … and then for y = 0.1, x is here [pointing to (10, 1)]. 2. S2: This distance [pointing to (10, 1); see Fig. 4] and this one [pointing to (−1, 10)] are equal. 3. S1: The difference is that this one goes up one y for every ten x, and this one, the opposite, up ten y for every minus one x … 4. S2: That is, both have moved one step from each axis [pointing at the distances from the x- and y-axes respectively]. It is like the axes kind of turn around and remain perpendicular. 
On turns 1 and 2, S1 and S2 recognize the similar distances from the y- and x-axes and justify them by referring to how the m-values affect the slopes relative to the axes. On turn 3, S1 explains why the distances are equal, which supports the claim on turn 4 as to why the graphs remain perpendicular: that the equations made them move equal distances from each axis. This may be an example of a Type 3 justification, the generic example. Instead of just stating 'it works', they refer to how properties of the x-coefficient affect the distances to the x- and y-axes. The students do not explicitly articulate that the equal distances they refer to mean that the angles between the graphs and the y-axis complement each other to 90°, and it could be questioned whether this is an instance of a generic example; whether they actually go beyond simply stating that it works. However, with regard to the age of the students (15 years old) and the fact that they had already constructed a couple of examples before the extract, it is still regarded as an attempt to explain more than merely being content with finding a working method. Students Q1 and Q2 tried to develop their proof into Type 4, the thought experiment: 1. Q1: Well … if one m is four the other must be minus one over four … and five means minus one over five. Multiplied they [both sets] equal minus one. 2. Q2: Yeah, whatever the x coefficient is, the other is always minus one divided by the first. 3. Q1: That means they must always be minus one … that goes for all values. Turn 1 is still connected to concrete examples. Turn 2 generalizes further, while turn 3 claims that the conclusion is true for all values. It appears that they have recognized the pattern and realize that with the rule they can create perpendicular graphs for any linear function. But they still do not answer the why question. They are asked again: why does your rule work? 4. Q1: Well ... one must be positive and one negative ... if we start from m equals one and make m one let's say ten times bigger ... m two must be ten times smaller [it is assumed that Q1 means 1 divided by 10 with respect to the earlier parts of the task solving] ... if one has ten times a slope, the other must have an equally less slope. 5. Q2: Yes ... but one of them must be negative ... and you can take any number. Q1 returns to using an example, but Q2 claims the generality. With respect to the age of students Q1 and Q2 (both 14 years old), proving that the rule works in general is mathematically advanced, and the students do not explain it very deeply. In Sweden, this level of mathematics is normally taught to 17-18-year-olds and, at this age, it is reasonable to demand a more specific proof, for example, considering that the graphs of two linear functions with negative inverted m-values form uniform perpendicular triangles parallel to the axes, which may be used to prove that the angle between the graphs must be 90°. For the younger students in this study, however, it was regarded as sufficient that they, with the aid of the visualization on the screen, articulated less specific but nonetheless correct reasons for the generality. However, even though Q1 and Q2 in some sense develop their proof into some generality, it is not fully Type 4, which for the ages of the students in this study is rare (see Balacheff 1988).
Fig. 4 Arrows point to the distances from the x- and y-axes, indicating perpendicular lines
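For reference, the rule the students are working towards can be obtained by a short, standard argument in the spirit of the perpendicular-triangles idea mentioned above. The sketch below is illustrative only; it is not a reconstruction of any pair's reasoning, and it drops the constant terms because they only translate the graphs without changing the slopes.

```latex
% Standard derivation of the perpendicularity rule m_1 m_2 = -1 (illustrative sketch only).
\begin{align*}
&\text{Let } y = m_1 x \text{ and } y = m_2 x \text{ meet at the origin } O,\ m_1, m_2 \neq 0.\\
&\text{One unit to the right of } O \text{ the lines pass through } P_1 = (1, m_1) \text{ and } P_2 = (1, m_2).\\
&\text{The lines are perpendicular exactly when the triangle } O P_1 P_2 \text{ is right-angled at } O:\\
&\qquad |P_1 P_2|^2 = |O P_1|^2 + |O P_2|^2
\;\Longleftrightarrow\; (m_1 - m_2)^2 = (1 + m_1^2) + (1 + m_2^2)\\
&\qquad\Longleftrightarrow\; -2\, m_1 m_2 = 2
\;\Longleftrightarrow\; m_1 m_2 = -1 .
\end{align*}
```

Dropping the constant terms mirrors T1 and T2's observation that removing c changes only the position of the intersection point, not the slopes.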
Students Solving Task 2 Following the Hypothetical Path for Task 2 No student following the hypothetical path for Task 2 articulated any proof while working towards the solution. When asked 'Can you explain why the rule works?', no pair solving Task 2 offered any proof other than Type 1, as exemplified by students A1 and A2: In both these extracts, the researcher encouraged students to elaborate on their answers, but there were no examples of students developing their proofs beyond Type 1. Students Deviating from the Hypothetical Path for Task 2 Students N1 and N2 deviated from the beginning and followed the hypothetical path for Task 1 through to their solution. Students M1 and M2 followed the instructions, but failed to submit the suggested linear functions correctly and could not formulate a rule. After abandoning the instructions, they solved the task in a way similar to the usual solutions to Task 1. Students L1 and L2 started by following the instructions but, unlike other solvers of Task 2, completed the sub-tasks by investigating the properties of, and relations between, representations of linear functions. This is exemplified in the following extract: 1. L1: Perpendicular to y equals minus x plus three is … it must be just x … or … 2. L2: I think so … 3. L1: Then if you do one like this [pointing to the first quadrant] … then you must do another down here [pointing to the second quadrant] … that would be y equals one x … and y equals minus one x… The conversation shows that they do not merely accept that the graphs are perpendicular but try to justify why. A bit further into the solution they read the instructions: Unlike other solvers of Task 2, students L1 and L2 do not just multiply the x-coefficients, but instead try to understand what that means for formulating the rule. After formulating a rule (first m2 = −1/m1, which was revised into m1 × m2 = −1), they created further examples to support their proof of their rule. Discussion and Conclusions Guided by the research question, 'What different methods, if any, do students use to utilize the potential of GeoGebra to support problem solving and reasoning when working with non-routine tasks with different characters of guidance?', this study showed important differences between solvers of Task 1 and Task 2 in utilizing the potential of GeoGebra. Beyond the most expected differences (e.g. most students who solved Task 2 followed the instructions and those who solved Task 1 had a more complex path to the solution), it is apparent that students who solved Task 1 also, to a larger extent, explored properties of and relations between different representations of linear functions. Looking in detail reveals that Task 1 students more often posed questions and formulated hypotheses. In turn, they engaged with GeoGebra for the purpose of answering questions and verifying/falsifying hypotheses. These actions may also explain differences in the utilization of tools. Task 2 students (who followed instructions) mainly used the 'angle-measuring tool' and, in some cases, the 'hide-and-reveal feature' to answer the questions given in the instructions. Task 1 students used more tools, often for the purpose of examining properties of linear functions, for example the significance of the x-coefficient and the constant term. 
When it comes to articulating a proof for the solution, there is a difference between Task 1 and Task 2 solvers in that Task 2 students referred to the results (the table) of following the instructions, while Task 1 students often referred to the process towards the solution, creating new actions to support their claims further. An observation is that students who solved Task 1 seemed to utilize the potential of GeoGebra because they needed to understand properties of linear functions in order to solve the task. Students who solved Task 2 and utilized the potential of GeoGebra extensively did so either because they had made mistakes preventing them from solving the task (students M1 and M2) or because they chose to do so (students L1 and L2, and N1 and N2). That means that Task 2 could be solved without utilizing the potential of GeoGebra and, in this study, most students chose not to. Intention and Outcome of Task 2 Reasonable arguments in favour of the design of Task 2 include preventing students from engaging in fruitless actions that do not bring them any closer to a solution to the task. Guidelines assist them in creating mathematical objects appropriate to support the solution, and questions direct them to focus on what is important for creating the solution. Task 2 in this study provided the students with multiple examples of successfully constructed objects and perpendicular graphs, and invited them to explore the relationships between the graphical and algebraic representations of linear functions. Such circumstances have been suggested as beneficial for reasoning (Falcade et al. 2007), problem solving (Santos-Trigo et al. 2016) and developing an understanding of functions (Vinner and Dreyfus 1989). However, in this study, even though Task 2 students managed to create the rule, they did not utilize GeoGebra's potential to support either creative reasoning (Lithner 2008) or problem solving. On the contrary, these students mainly used GeoGebra to submit the suggested formulas and to measure angles. The students' choice not to utilize the explorative potential of GeoGebra could be explained by their having no reason to distrust that the suggested actions were appropriate for solving the task. When the mathematical objects created by the suggested actions were sufficient to solve the task, students may have seen no reason to investigate them further. By extension, this means that they never developed systematic procedures with certain purposes, which Leung (2011) characterizes as establishing Practicing Mode (PM). Furthermore, as following the instructions will generate all the necessary information, there is no incentive to enter Critical Discernment Mode (CDM). Even though they had some issues to consider, they never had to make a critical judgement regarding how to implement the tools to obtain correct information. Finally, formulating the rule for when two linear functions have perpendicular corresponding graphs was unproblematic, which meant there were no actions like making a generalizing conjecture or reasoning to prove or explain. Thus, solvers of Task 2 did not enter Leung's third mode, Situated Discourse Mode (SDM). Although research often highlights features of dynamic software as supportive of problem solving and reasoning, these results show that, in order to encourage students to utilize the potential of GeoGebra, it is not efficient to provide students with detailed instructions for creating supportive mathematical objects. 
Task design does seem to be important in encouraging students to utilize the potential of the software. The results indicate that students solving a task with specific guidance are unlikely to use the explorative potential that research has advocated as beneficial, and this may negatively affect students' learning. Less Specific Guidance Enhances Utilization of GeoGebra The results of this study indicate that the design of Task 1 encourages students to utilize the defined potential of GeoGebra. Given general instructions about which mathematical objects to construct, students must consider which specific objects to create and whether they will contribute to the solution. In this study, students repeatedly created graphs by submitting algebraic representations of linear functions into GeoGebra, and they interpreted the outcomes by investigating properties of, and relations between, algebraic and graphic representations. This illustrates what the literature has suggested as the benefit of using dynamic software in mathematics education (e.g. Hohenwarter and Jones 2007; Preiner 2008) and for learning functions (Carlson 1998; Vinner and Dreyfus 1989). It is also an example of benefitting from GeoGebra's ability to display multiple representations, which is suggested as beneficial for learning functions (Preiner 2008). In this study, the design of Task 1 (guidance in how to engage in the task, but no instructions for how to create the solution) was shown to engage students in CMR, so that, beyond constructing the solution, they created arguments to explain why their solution solved the task. That probably means that they learnt better through understanding their solution better. An important point is that the students who solved Task 1 had access (see the 'Method' section) to instructions showing how to use GeoGebra. This did not seem to negatively affect their utilization of GeoGebra's potential to support problem solving and reasoning. This would indicate that it is important to distinguish between instructions regarding how GeoGebra's tools are used and instructions about how the task can be solved. The distinction between instructions for using the tools and for solving the task may be important with respect to establishing PM. Without exception, after a few attempts the students who solved Task 1 started creating linear functions with certain purposes. As they created the functions with corresponding graphs on their own, they needed to create and adjust the references to be suitable for observing and understanding patterns supporting the solution. This resonates well with CDM. With respect to the third of Leung's modes, SDM, solvers of Task 1 certainly had incentives to explain and formulate proofs, but it appears that they largely relied on the visual information provided by GeoGebra. They did not formulate any deeper arguments unless they were encouraged to do so by the author, although there were some attempts to use software tools to create and adjust references for deeper explanations (e.g. students T1 and T2 removed the constant term to have a clearer view of the graphs). However, there is room for development of the instructions for Task 1 here. Encouragement could be included in the written instructions; but, probably more importantly, it should be the norm always to have students explain their solutions. These aspects are issues for further investigation. The Aspect of Proof Neither Task 1 nor Task 2 included explicit instructions to the students to formulate a proof for their solutions. 
However, there were significant differences in the ways students articulated proofs for their solutions, both during the implementation of the task and when they answered the follow-up question. An explanation for these differences could be that students solving Task 1 needed to convince themselves that their solution would work for every sub-task. For example, when calculating the x-coefficient corresponding to 3 as −1/3 in perpendicular graphs, they needed to verify their solution by investigating whether the x-coefficients 4 and −1/4 would also result in perpendicular graphs. Students who had convinced themselves that the solution worked might be more prepared to explain it on an intellectual level and to consider the mathematics behind its construction (Mariotti 2000). That is, they may have chosen to explain why it worked through investigating the properties of, and relations between, the mathematical representations they had constructed, which might indicate a deeper understanding of the solution. Students who solved Task 2, on the other hand, could merely trust the instructions, follow the guidelines, and reach an answer with no need to assess any of their work. In this study, when answering why their rule works, solvers of Task 2 articulated Type 1 proofs (naïve empiricism) for their solutions, which may indicate that their understanding of the constructed rule was rather superficial. Furthermore, even though Task 2 required the students to come to a conclusion about the rule for when graphical representations of linear functions are perpendicular, their approach to the task could be interpreted as procedural, which has been found to be ineffective for developing deep understanding (Oehrtman et al. 2008). This study contributes to the field by showing that it is not sufficient to provide students with dynamic software to promote their engagement in exploring and investigating mathematical objects. It is also important to consider the task design. In school, students are often given tasks with specific guidance because such tasks are thought to be easier to engage with. However, this study shows how and why guidance included in the design of tasks to be solved using dynamic software must not remove the incitements to construct parts of the solution and to formulate arguments and justifications. Thus, it is reasonable to assume that such tasks could lead to a better understanding, which may also indicate better learning outcomes.
Adherence to medications among Nepali hypertensive population: A hospital-based cross-sectional study Background: Hypertension is a chronic medical condition which needs adequate management. Patients' adherence to a proper hypertension medication regimen is the key factor in controlling hypertension and reducing the associated complications. However, poor adherence to antihypertensive drugs is a worldwide problem, which is responsible for adverse health outcomes and thus increases health complications along with health management costs. Here we aim to identify the factors associated with medication adherence among hypertensive patients in Nepal. Methods: We conducted a hospital-based cross-sectional study in Nepal from September 2016 to November 2016. We collected data from 260 hypertensive patients who had been on antihypertensive medication for at least 1 year. Data were collected by interview, and medication adherence was measured by three questions that were developed following the Morisky-Green test (MG) and Wong et al. Results: Good adherence with medication was observed in 51.9% of the 260 hypertensive patients. Female patients (57%) were found to be more likely to adhere to their medication regimen than their male counterparts (47%). Multivariate logistic regression indicates that married patients were twice as likely (95% CI: 1.01–4.46; P < 0.05) to adhere as single or divorced patients. It also appeared that patients who had been taking medication for more than 10 years were 54% less likely to be compliant than hypertensive patients who had been on medication for less than 10 years (OR=0.46). Availability of antihypertensive drugs near the house (within 1 km) is important in reducing poor adherence (OR=3.10, CI: 1.68-5.87). Conclusion: Adherence levels were higher in married hypertensive patients who had been on medication for less than 10 years. Availability of the hypertensive medication near the house (<1 km) is also important in reducing non-adherence to antihypertensive medication. Introduction Control of blood pressure is a continuous process aimed at preventing adverse health outcomes such as coronary heart disease, heart failure, stroke and premature death [1]. Though safe and effective drugs are available, the management of hypertension is still far from optimal, which is largely attributable to poor medication adherence [2]. Medication adherence can be characterized as the degree to which the patient complies with medication, dietary and behavioral recommendations. Poor adherence is especially common among hypertensive patients. Existing research shows that in several European countries less than 60% of treated hypertensive subjects have an adherence to their treatment above 80% [3,4]. In surveys, poor adherence was also found in 53.4% of hypertensive patients in Malaysia [2] and 41% in Bangladesh [5]. Patients' adherence to the prescribed antihypertensive medication is an important factor in achieving blood pressure targets. No matter how efficaciously the clinician communicates the benefits of antihypertensive therapy, patients are responsible for taking their medications. 
Poor adherence to antihypertensive medication is likely to be a major contributor to medication failure, which may result in more visits to healthcare professionals, additional medication switches, dose escalations, and even hospitalization [6,7]. It is estimated that approximately 50% of patients in a general hypertensive population do not take their antihypertensive medication as prescribed [8,9]. Therefore, improving adherence to prescribed drug regimens in this population remains a major challenge to the treating physician. Non-communicable diseases are increasing in Nepal, and hypertension is one of the most prevalent non-communicable diseases [10]. To date, we have not found any study conducted to identify the factors associated with non-adherence to antihypertensive medication in Nepal. However, numerous epidemiological and prospective studies conducted in different countries have demonstrated that hypertension which remains untreated for many years, or is unsuccessfully treated for reasons such as poor patient adherence, may lead to severe health problems such as stroke and cognitive impairment. Therefore, it is important to identify the factors associated with poor adherence to antihypertensive medication. The study will help inform the steps needed to manage hypertension through medication. The data The study was a hospital-based cross-sectional survey, and we collected data from the Kathmandu University Dhulikhel Hospital of Nepal, which is located 30 km away from the Kathmandu Valley (the capital city of Nepal). From the patients' records, we identified 260 hypertensive patients who had been suffering from hypertension for at least six months and had been prescribed medications to control it. All the participants gave their informed consent to participate in the study and were informed of their right to withdraw from it at any time. Adherence with medication Wong et al. described medication compliance as the degree to which medical or health advice corresponds with the individual's behavior, such as taking the prescribed medication, making changes in lifestyle, and participating in therapeutic arrangements [11]. Morisky et al. described treatment compliance in terms of four items coded as "yes" or "no" [12]. These items relate to not taking medications because of carelessness, forgetfulness, feeling better, or feeling worse. Morisky et al. [12] emphasized forgetting to take medication in defining medication adherence, which can be improved by reminders from family members. We measured adherence by asking three questions in a self-reported adherence test following the Morisky-Green test (MG) and Wong et al. Hossain and Mithila also included these three questions in their study [5]. The three questions include: (1) whether the patient continues with the medicine, (2) whether the patient continues with regular clinic attendance, and (3) whether the patient gets social support from family members or friends who were concerned about the respondent's hypertension or who were helpful in reminding the respondent about taking medication [11,12]. The first question on medication adherence is categorized as "adherence: yes" (where the respondent 'never' or 'rarely' misses his/her medication doses) and "adherence: no" (where the respondent 'regularly' or 'fairly regularly' misses his/her medication). A positive response to each question counted as a score of 1. 
Based on these three questions, we defined adherence with medication as "yes" if an individual had a total score of two, including a positive response to the first question. Independent variables A semi-structured questionnaire was designed to obtain patients' information on sex, age, height, weight, schooling, marital status, occupation (day-labor, employed, unemployed), number of family members (≤ 4, >4), average monthly family income (<50000 Nepalese rupees or ≥ 50000), housing (owned or rented), and living area (urban or rural). Questions on other health-related problems (yes or no), family history of hypertension (yes or no), duration of hypertension, smoking habit (yes or no), duration of high blood pressure, number of antihypertensive drugs, exercise habit, average daily sleep duration and availability of drugs near the house (within 1 km) were also included. Age was calculated in years from the date of birth, and body mass index (BMI) was calculated from the height and weight. The BMI was categorized as normal (BMI: 18.5-24.9), overweight (BMI: 25-29.9) and obese (BMI ≥ 30). The duration of antihypertensive medication was categorized as ≤ 1 year, 2-3 years and ≥ 4 years. Sleep duration was assessed by the question, "How many hours each day do you spend sleeping?" Sleep duration was categorized as short sleep duration (<6 h) and normal sleep duration (≥ 6 h). Statistical analysis We analyzed the data using the software R. Descriptive statistics were calculated for all of the variables, including continuous variables (presented as boxplots) and categorical variables (presented as frequencies). We used chi-square tests to evaluate the unadjusted association between each categorical variable and medication adherence. Multivariate logistic regression analysis was conducted to obtain the odds ratios of factors associated with medication adherence. The results are reported as odds ratios (ORs) with corresponding 95% confidence intervals (CIs). P-values less than 0.05 were considered statistically significant. Ethical approval Ethical approval for the study protocol was obtained from the North South University Review Committee, Dhaka (NSU-PBH-EA-1032) and the Institutional Review Committee of Kathmandu University School of Medical Sciences, Dhulikhel Hospital, and written informed consent was obtained from all the participants. Results Among the 260 hypertensive patients, there were 132 male and 128 female patients included in the study. The baseline characteristics of the participants, such as sex, age, marital status, schooling, occupation, average monthly income, living area, smoking history, alcohol consumption, duration of antihypertensive medication, presence of other chronic complications, number of drugs used for hypertension, duration of sleep, exercise and availability of hypertensive drugs near the house, are described in Table 1. Female patients (57%) had a higher prevalence of adherence with medication than male patients (47%), though the difference is not significant at the 5% significance level. The patients who lived in a rented house and who belonged to the lower monthly family income group (less than 50000 Nepalese rupees) were found to be more compliant with the antihypertensive medication. In addition, the patients who were above 60 years of age were more non-compliant than the patients in the less-than-50-years and 50-59-years age groups (Table 2). We fitted a multivariable logistic regression model of medication adherence after adjusting for all the factors. 
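As an illustration of the analysis described above, the following is a minimal sketch in Python rather than R (the paper reports using R); it is not the authors' code, and the file and column names are hypothetical. It scores the three adherence items as described, runs a chi-square test for an unadjusted association, and fits a logistic regression to obtain adjusted odds ratios with 95% confidence intervals.

```python
# Illustrative sketch only: hypothetical file and column names, not the study's actual code.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

def adherent(takes_medicine: bool, attends_clinic: bool, social_support: bool) -> int:
    """Score the three self-reported items (1 point each) and classify as adherent (1)
    when the total score reaches two and the first item is positive."""
    score = int(takes_medicine) + int(attends_clinic) + int(social_support)
    return int(score >= 2 and takes_medicine)

# One row per patient; the CSV and its columns (stored as 0/1 or booleans) are
# assumptions made for this sketch.
df = pd.read_csv("adherence_survey.csv")
df["adherent"] = [
    adherent(m, c, s)
    for m, c, s in zip(df["takes_medicine"], df["attends_clinic"], df["social_support"])
]

# Unadjusted association between a categorical factor (here sex) and adherence.
chi2, p_value, dof, _ = chi2_contingency(pd.crosstab(df["sex"], df["adherent"]))
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

# Multivariable logistic regression; adjusted ORs are the exponentiated coefficients.
model = smf.logit(
    "adherent ~ C(sex) + C(marital_status) + C(duration_group) + C(drugstore_within_1km)",
    data=df,
).fit()
odds_ratios = np.exp(model.params).rename("OR")
conf_int = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([odds_ratios, conf_int], axis=1))
```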
The adjusted odds ratios (ORs) with confidence intervals are given in Table 2. Male patients were 40% less likely to be adherent to the antihypertensive medication than the female group (OR=0.60, CI=0.33-1.10). In addition, we found that married patients were about twice as likely to be adherent as the divorced or widowed patients (OR=2.09, CI=1.01-4.46). The p-value for marital status was 0.05, indicating that it is a significant variable at the 5% significance level. Employed patients appeared to have poorer adherence to the antihypertensive medication than unemployed patients (OR=1.96, CI=0.80-4.90). It should be noted that most unemployed patients were housewives, and therefore most housewives were less likely to take antihypertensive medications. The patients who were living in a rented house were 2.60 times more likely to be adherent to medication than the patients who lived in their own house (OR=2.60, CI=1.11-6.34). The patients who had been taking antihypertensive medication for more than 10 years were 54% less likely to be adherent than newer patients treated for 1-5 years (OR=0.46, CI=0.18-1.11). The availability of antihypertensive medication within 1 km also appears to be important for the patients: those who had a drugstore near the house (within 1 km) were 3.10 times more likely to have good adherence with the medications (OR=3.10, CI=1.68-5.87).

Discussion
Medication adherence is essential for hypertensive patients to control their blood pressure. This study found a very high percentage of poor adherence (52%) among Nepalese hypertensive patients, which means that for many hypertensive patients medication adherence needs to be improved. This result is close to what has been reported from Malaysia (53.4%) [2], Taiwan (47.5%) [13] and Bangladesh (41%) [5]. In this study, being divorced or widowed, living in one's own house, taking antihypertensive medication for more than 10 years, and not having the drugs available within 1 km showed a statistically significant association with poor adherence to medication (P<0.05) in the multivariate logistic regression. Marital status was also found to be an associated factor in the study from Bangladesh [5]. We did not find age to be a significant variable, but the odds ratio suggests that patients in the 50-59-years age group were 1.50 times more likely to have good adherence with the medication than the younger age group (<50 years). These results are consistent with a study from the United Kingdom [15] and suggest that adherence improves as patients get older. Poor adherence was not significantly influenced by sex or by schooling, although male patients appeared to have poorer adherence than female patients. We did not find any behavioral factors associated with poor adherence to antihypertensive medications. This study also demonstrated that the availability of antihypertensive medications is important for reducing the prevalence of poor adherence.

Conclusion
More than half of the hypertensive patients were found to have poor adherence, which negatively affects blood pressure control. This means that for many hypertensive patients, medication adherence needs to be improved. Developing intervention programs that address some of the factors identified here is necessary to improve adherence and, in turn, blood pressure control.
These findings may be used to identify the subset of the population at risk of poor adherence that should be targeted for interventions to achieve better blood pressure control and hence prevent complications.
Hamiltonian formulation of higher rank symmetric gauge theories

Recent discussions of fractons have evolved around higher rank symmetric gauge theories with emphasis on the role of Gauss constraints. This has prompted the present study, where a detailed hamiltonian analysis of such theories is presented. Besides a general treatment, the traceless scalar charge theory is considered in detail. A new form for the action is given which, in 2+1 dimensions, yields area preserving diffeomorphisms. Investigation of global symmetries reveals that this diffeomorphism invariance induces a noncommuting charge algebra that gets exactly mapped to the algebra of coordinates in the lowest Landau level problem. Connections of this charge algebra to noncommutative fluid dynamics and magnetohydrodynamics are shown.

Introduction
Higher rank antisymmetric tensor gauge theories, also called p-form theories, are an old subject with many ramifications. Recently, another class of higher rank tensor gauge theories has come into the spotlight but, contrary to the p-form theories, the gauge fields here are symmetric in the tensor indices [1][2][3][4]. Their study has led to the discovery and understanding of a new class of topological matter called 'fractons', whose properties and applications have been analysed extensively [5][6][7][8][9][10][11]. A striking feature is the constrained mobility of these fractons. The excitations are either immobile or they move in subdimensional spaces, i.e. spaces of dimensionality lower than that in which they were formulated. On the other hand, composites of elementary fractons may move freely. These features are a consequence of the existence of nonstandard conservation laws. Contrary to the usual Gauss constraint in Maxwell theory, which involves a single derivative of the electric field, here it involves two or more derivatives depending on the rank of the tensor gauge theory. The higher derivative Gauss constraint leads to the conservation of the electric charge (which is the usual conservation law) along with that of the dipole moment and other higher moments of the charge distribution. Conservation of the dipole moment now renders a single charge stationary although it allows for partial motion of dipoles. Similarly, other conservation laws have other implications. It is clear that the Gauss constraint plays a pivotal role in the physical understanding of fractonic excitations. Also, it is crucial in the construction of such higher rank theories. The presence of higher derivatives gives considerable leeway in the construction of electric and magnetic fields, the associated Gauss constraint and the higher rank gauge theory itself [1,2]. The general approach is to postulate the basic symplectic structure and the transformation laws from which the Gauss constraint is guessed. From these results, the lagrangian is obtained by inspection [12]. However, it is possible to have a different symplectic structure with a different Gauss constraint that generates identical transformations. In other words, it is desirable, if not essential, to have a structured algorithm from which the various expressions follow.
Motivated by these possibilities we look for a systematic hamiltonian formulation of higher rank symmetric tensor gauge theories. An overview of the traceless scalar charge theory, which is a prototype of these higher rank theories, is provided in Sect. 2 that illuminates some of the problems and caveats. We next discuss, in Sect. 3, the scalar charge theory, pointing out the differences and similarities with the earlier example. Here (Sect. 3.1) the matter sector is considered in some details, analysing the various global symmetries that yield the fractonic conservation laws. Especially, the results of Seiberg [4] on global symmetries are extended to include another conservation law besides charge and dipole moment. In Sect. 4 a new lagrangian is suggested that governs the dynamics of a traceless scalar charge theory. This is a mixed system containing both first and second class constraints. While the second class set is eliminated by computing the relevant Dirac brackets, the first class constraints are used to define the gauge generator. Eventually the first class constraints are also eliminated by a suitable gauge choice that may be interpreted as the analogue of the radiation gauge in usual gauge theories. The resulting gauge fixed brackets are compared with the radiation gauge brackets in Maxwell theory. So far the analysis was in any dimensions. We specialise to (2 + 1) dimensions in Sect. 5. A specific change of variables is done that solves the traceless constraint A ii = 0 by expressing the original field A i j in terms of a traceful field, introducing a length scale. A complete hamiltonian analysis reveals the structure of constraints and the gauge generator. It is found to generate area preserving diffeomorphisms in linearised gravity. As a consequence of this symmetry, elaborated in Sect. 5.1, the charge algebra is found to be noncommuting. In Sect. 6 we show that the area preserving diffeomorphism is exactly mapped to the lowest Landau level problem. The length scale introduced in the change of variables mentioned earlier is identified with the constant magnetic field of the Landau problem. Further, we also show in Sect. 6.1 that the charge algebra is identical to that found in noncommutative fluid dynamics or in magnetohydrodynamics in the presence of a strong magnetic field, examples that mimic the physics of the lowest Landau level problem. Conclusions are given in Sect. 7. Overview of the traceless scalar charge theory We give here an overview of the traceless scalar charge theory [1,2], basically for a couple of reasons. First, it is a prototype of higher rank tensor gauge theories recently considered in the literature which serves to highlight some of the caveats in the theoretical analysis. Secondly, it is this particular example that will be treated exhaustively in our paper. These higher rank tensor gauge theories are usually constructed in analogy with the Maxwell theory. The traceless scalar charge theory, for instance, is defined by a gauge potential A i j , which is a symmetric rank 2 tensor A i j = A ji , and its conjugate momenta which is the electric field, E i j , that is also symmetric. 1 These variables satisfy the usual Poisson algebra, The Gauss constraint is an analogue of the usual Maxwell one with an extra derivative but there is another constraint, the tracelessness of the electric field, which has no analogue in the Maxwell theory. These are given by, where ρ is the charge. The constraints are implemented weakly in the sense of Dirac [13]. 
This means they cannot be put directly inside the brackets (1) (in that case there is a contradiction as may be easily checked from the last relation in (1)), but only after the complete algebra has been computed. These constraints lead to the weak conservation of the charges, While the first one yields the usual charge conservation, the second implies that a charge is immobile while the last indicates that a dipole can move only normally to the dipole moment. Incidentally the last one is a consequence of the tracelessness of the electric field. But, as we shall show, this is not essential and may be obtained even if this condition does not hold. It is seen that the constraints (2) generate the following gauge transformations on the potentials, where α, β are the gauge parameters. One possible way to construct a lagrangian invariant under the above transformations is to write a first order form by inspection that would generate the above constraints and from which the hamiltonian is easily read-off. This is the approach adopted in [12] where the lagrangian is given by, where B i j is the magnetic field defined in three spatial dimensions as, which is also symmetric and traceless, thereby rendering it gauge invariant, The constraints (2) (for the source free case) are now implemented by the multipliers φ and θ while the positive definite terms involving the electric and magnetic fields are identified with the hamiltonian. The equation for E i j defines the electric field, Contrary to the magnetic field the electric field in not manifestly traceless although it is manifestly symmetric. This lack of tracelessness signals a possible caveat in the formulation. Indeed, the imposition of the constraints (2) on (8) yields certain relations among the variables (A i j ) and the multipliers (φ, θ ), the implications of which are not clear and have not been discussed. An alternative way to handle the problem was suggested in [11] where the second constraint in (2) was taken along with a new constraint, A ii ≈ 0. It is now possible to strongly implement the constraints so that E ii = A ii = 0 by using Dirac brackets instead of Poisson brackets. This is, however, a purely algebraic manipulation that does not illuminate the dynamical origin of the constraint (2). The A−E bracket now gets modified. One can compute this by using the usual Dirac procedure. However, in this case it can be readily derived by noting that the constraints are algebraic (there are no differential operators) so that any correction to the Poisson bracket must be algebraic. Recalling the symmetric nature of both A i j and E i j we find, The correction term to the Poisson bracket (1) emerges since the result must be zero for the choices i = j and k = l so that A ii = 0 and E kk = 0 are valid. The above modified bracket (denoted by a star) is the relevant Dirac bracket. After this partial gauge fixing there survives only the higher derivative Gauss constraint, given by the first relation in (2). In view of the algebra (9), the Gauss constraint (2) generates the gauge transformation, It is useful to make a simple consistency check. Since A ii = 0 (that was the gauge fixing condition), its variation must vanish; i.e. δ A ii = 0. This holds, as may be easily seen from (10). A gauge invariant lagrangian was also suggested, based on inspection. 
The electric and magnetic fields were introduced as, (11) and the lagrangian was written as, However it may be verified easily that this lagrangian 2 neither yields the Gauss constraints (2) nor does it yield the symplectic structure leading to the Dirac brackets (9). This is explicitly shown at the end of Sect. 3, below (25) onward. It is thus seen that simply by postulating a set of Gauss constraints might not lead to the construction of a consistent and viable action formulation. Indeed by just giving the Gauss constraints and the gauge transformations, we are already admitting to a specific symplectic structure which the action should reveal. But this need not be achieved as we just saw. The role of the constraint E ii ≈ 0, which is algebraic in nature, is also unclear. It does not appear from an analysis of the lagrangian (12) and, where it appears (see (5), it leads to ambiguities. It is thus necessary to develop a systematic formulation where these and other issues are clarified. This is the object of the next section. We conclude this section by commenting on the structure of the lagrangian (12). Because of the higher derivative nature the form of the electric and magnetic fields is not unique. It is of course possible to carry out the hamiltonian analysis once their explicit forms are known. The constraint structure would change leading to different transformation laws brought about by a change in the Gauss law. The scalar charge theory The higher derivative theories introduced in the previous section were motivated by the Maxwell theory. The principal difference from the standard Maxwell theory is the presence of higher derivatives which gives considerable freedom in defining gauge invariant electric and magnetic fields and hence in the construction of the lagrangian itself. In this section we present a detailed analysis of such a theory-the scalar charge theory. We first discuss the free theory and later consider the implications of coupling with sources. The lagrangian is defined, exactly in analogy with the Maxwell theory, by, where the electric and magnetic fields have been introduced in (11). The canonical momenta are given by, with the over-dot indicating differentiation with respect to time. Since it involves time derivatives, only π i j is a genuine momenta. The other one is a constraint, a primary constraint, canonical hamiltonian as, where λ is a multiplier enforcing the primary constraint. The Poisson algebra among the basic variables is given by (1) and, Time conserving the primary constraint { 1 , H T } ≈ 0 immediately yields the secondary constraint, which is the Gauss constraint, No further constraints are generated by this iterative process since { 2 , H T } = 0. As the constraints are involutive, these are first class. The system is thus a clean example of a gauge theory. The gauge generator, following Dirac's conjecture, is a linear combination of all first class constraints of the theory. Thus, it is given by, where α 1 , α 2 are the gauge parameters. However these are not independent. The number of independent parameters is given by the number of independent primary first class constraints, which is one in this case. There is a set of equations from which the relation between the parameters can be obtained [14,15]. In this case, however, we find this by an alternative method. The gauge generator generates the following transformations on the fields, with the last relation merely showing the gauge invariance of the electric field. 
We now take the variation of the fields appearing on either side of the first equation in (11), using (21). We find, from which we immediately obtain, Renaming α 2 as α, we obtain the following transformations under which (13) is invariant, Although the transformation for A i j reproduces the result (10), there are crucial differences. The structure of the Gauss constraint in the two cases (2), (18) is distinct and so is the algebra among the basic variables. The two differences cancel to yield the same result. This reinforces the necessity to carry out a systematic analysis by starting from a specific lagrangian instead of simply postulating certain transformations. In this presentation the gauge choice A ii ≈ 0 cannot even be done, let alone reproducing the result (9). This is because A ii is gauge invariant, having a vanishing algebra with the Gauss constraint (18), This is the physical reason. Algebraically, the matrix formed by the Poisson brackets involving the complete set of constraints -the Gauss constraint and the gauge conditionbecomes noninvertible so that the Dirac brackets cannot be defined. The same conclusion holds if we started from (12) instead of (13) (see footnote 3). It is actually possible to prove that, in this theory, there is no gauge choice that yields the symplectic structure (9). This quite general statement further bolsters the observation made below (12). Any valid gauge choice would lead to Dirac brackets that satisfy the strong imposition of both the Gauss constraint (18) as well as the gauge condition. Assuming that there is a gauge choice that yields (9), then it must satisfy the condition, noting that the canonical momenta and the electric field just differ by a sign. This does not hold as may be easily seen by applying the differential operator on the right side of (9). We find a contradiction, Thus the algebra (9) is untenable. To complete the picture, we choose an appropriate gauge and compute the symplectic structure. A valid gauge choice is given by, With such a choice the Dirac brackets are given by, 3 It is verified that this structure is compatible with the strong imposition of both the Gauss constraint (18) as well as the gauge condition (28). The matter sector Let us now introduce sources with J 0 and J i j coupling with A 0 and A i j respectively. We first discuss the pure matter sector, specifically the global symmetries [4], and then follow it up by the complete theory. Gauge invariance under (24) implies the conservation law, 4 This result ensures the conservation of the three charges (3) without any restriction on the sources. This is shown in some details using the global symmetries. It has the usual global symmetry, where, and J is the trace of J i j , 5 as shown above. This leads to the usual conserved charge, Further, it has a vector global symmetry with currents, which yields the conservation law, 3 Since a detailed computation of such brackets is provided in Sect. 4, here the result is just given. 4 Notation: temporal indices are denoted by 0 while Latin indices indicate space, the two are combined by using Greek indices. Temporal indices change sign on lowering or raising, spatial ones do not. Nonrelativistic physics is being discussed which is made transparent in the last equality of (30). 5 The trace of other variables are denoted similarly, B ii = B etc. that may be verified from (30). The conserved charge here is, Finally, there is another scalar charge which is different from the usual one (33). 
The global symmetry is here defined by the currents, and satisfies an identical conservation law as (31), which is verified by using (30). The corresponding conserved charge is given by, The three charges (33), (36), (38) are those mentioned in (3). 6 It is possible to extend this analysis for multipole moments. Additional conservation laws would emerge corresponding to these higher moments. It also shows a connection with the conventional approach using higher rank symmetric tensor fields. More conservation laws follow from the introduction of higher rank gauge fields that couple with corresponding higher rank tensor sources, leading to a generalisation of (30). Let us next gauge the usual global symmetry (30) and write the complete lagrangian as, where L 0 is the contribution from the matter sector and the electric and magnetic fields have been defined in (11). 7 The equations of motion of the gauge fields are given by, After this gauging, the current of the global symmetry (32), (33) may be corrected by improvement terms such that it trivialises, exactly as happens for the standard U (1) gauge field, 6 Note that, in usual literature, the conservation of the charge (38) is achieved only if the traceless condition is imposed J = J ii = 0 [11]. This is not necessary here. (See also the discussion below (3)). 7 In those cases where the symmetric field A i j can be written in terms of the conventional U (1) field as A i j = 1 2 (∂ i A j + ∂ j A i ), the coupling in (39) may be expressed as −A i ∂ j J i j , which is equivalent to the discussion in Seiberg [4]. This decomposition is possible if δ A i j = ∂ i ∂ j α. This is not true here as may be seen from (24). where use was made of the equations of motion (40). As a consistency check, it can be shown that the above currents satisfy the conservation law (31). 8 The traceless scalar charge theory In this section we analyse the traceless scalar charge theory. Apart from comparing with previous approaches and results, we use these findings to subsequently discuss diffeomorphism symmetry from which the physics of the lowest Landau problem emerges naturally. The lagrangian is defined by, which, as far as we are aware, was not considered earlier. The difference from the lagrangian (13) of the scalar charge theory is the presence of the last term that enforces the tracelessness of the tensor gauge field. The canonical momenta are given by, Only the first one is a true momentum while the others are all (primary) constraints which have to be implemented weakly, To get the secondary constraints we have to first write the total hamiltonian, where χ i are the multipliers enforcing the constraints i . Time conservation of the primary constraints 1 and 2 yield further constraints, The 3 constraint does not generate any new constraint since, along with 5 , it forms a second class pair. The other three 8 Incidentally the currents (31) and the charge (32) are defined only modulo the improvement terms [4], In the present example, X i = ∂ j E i j − 1 d ∂ i E, Y ji = ∂ 2 B ji , which yields (41). 1 , 2 , 4 are first class since their algebra closes with all the constraints. We mention in passing that, contrary to usual approaches, here E ii ≈ 0 is not any gauge generator and neither is A ii ≈ 0 any gauge fixing condition. This pair of second class constraints is eliminated by calculating the relevant Dirac brackets and the answer was given in (9). It is thus clear that the formulation of a dynamical model for the traceless theory is nontrivial. 
If we first perform a canonical (hamiltonian) analysis and then constrain by imposing the traceless condition as a gauge fixing condition, we fail, as shown in the earlier section. If, on the other hand, we first impose the traceless condition by hand in the lagrangian and then perform the canonical analysis, we succeed. This is a typical example where canonical analysis and imposition of constraints do not commute and is a well known feature in constrained dynamics. Not only that, in the latter case we reproduce the algebra (9). Significantly, in the former approach, the constraint E ii = 0, which is an essential companion of A ii = 0, never appears. After the strong imposition of the second class constraints, the only physically relevant first class constraints are given by, The gauge generator is now given by, where λ 0 , λ are gauge parameters. Since there is only one primary first class constraint (π 0 ≈ 0), there is only one independent gauge parameter. Using the method discussed before we find that λ 0 = −λ. The fields A 0 and A i j transform exactly like (24). At this point the consistency check discussed around (10) is recalled. This also holds here. If, on the other hand, the scalar charge theory was taken with the transformation (4), it would be incompatible with δ A ii = 0. This shows the need for making cross checks in the consistency of the formulation. The structure of the constraints shows a close resemblance to the Maxwell theory. This may be pushed further if we perform the gauge fixing, which may be considered the analogue of the radiation gauge ∂ i A i ≈ 0 for the Maxwell theory. Together, the Gauss constraint and the gauge condition form a second class pair m of constraints and are eliminated by computing relevant Dirac brackets. The A−π (A− E) algebra is modified. The relevant Dirac bracket is defined as, where the * * indicates the final Dirac bracket which is computed in terms of the * bracket, which is the Dirac bracket derived at the first stage of the analysis when the original second class constraints were eliminated. Effectively the * bracket takes over the role of the Poisson bracket in the usual definition of the Dirac bracket. The inverse that appears above is the inverse of the star bracket involving the constraints. Incidentally the relevant * bracket (i.e. the first level Dirac bracket) has been defined in (9). After some algebra the final result is obtained, where, and ensures the vanishing of the Dirac brackets, This shows that the constraints are now implemented strongly by the final Dirac brackets, so that, = 0, = 0. Of course these brackets also satisfy A ii = π ii = 0, which were the second class pair of constraints before any gauge fixing was done. It is useful to recall the example of the Maxwell theory where the Gauss constraint ∂ i π i ≈ 0 is fixed by the radiation gauge constraint ∂ i A i ≈ 0 and the expression for the Dirac bracket is, where the transverse delta function is defined as, satisfying, As we see the structure in the present case (52) is much more involved than the Maxwell example. The reasons are twofold: the presence of higher order derivatives and the occurrence of the traceless constraints A ii ≈ 0, π ii ≈ 0 which do not have any analogue in the Maxwell theory. 
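For orientation, the standard Maxwell-theory expressions referred to in the preceding paragraph — the radiation-gauge Dirac bracket and the transverse delta function — have the familiar textbook form sketched below in LaTeX. This is quoted here only as a reminder; its sign and normalization conventions need not coincide with those of the equations of the present paper.

% Radiation-gauge Dirac bracket of Maxwell theory (standard textbook form)
\{ A_i(\mathbf{x}),\, \pi_j(\mathbf{y}) \}^{*}
    = \Big( \delta_{ij} - \frac{\partial_i \partial_j}{\nabla^{2}} \Big)\,
      \delta(\mathbf{x}-\mathbf{y})
    \equiv \delta^{\mathrm{tr}}_{ij}(\mathbf{x}-\mathbf{y}),
\qquad
\partial_i\, \delta^{\mathrm{tr}}_{ij}(\mathbf{x}-\mathbf{y}) = 0 .

The transversality of the delta function ensures that both the Gauss constraint and the radiation-gauge condition are implemented strongly by the bracket, which is the feature the higher-derivative analogue discussed above has to reproduce in a more involved form.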
Scalar charge theory in (2 + 1) dimensions and diffeomorphism symmetry Having discussed the issue of gauge fixing, we reconsider the theory (43) where the second class constraints were eliminated but the important Gauss constraint ≈ 0 (48) remained as a first class constraint. If we now specialize to (2 + 1) dimensions (i.e. d = 2) we find interesting physical consequences. One of these is discussed here where we are able to construct a theory that has diffeomorphism symmetry which may be interpreted as a theory of linearized gravity. Since the traceless constraint A ii = 0 in (43) is strongly imposed, it is possible to solve for it directly in terms of another symmetric, but not traceless, second rank field in the manner, The above construction ensures the symmetric and traceless nature of A i j , using only the symmetric nature of h i j . 9 The length scale l is introduced for dimensional reasons. Later, it will acquire a greater significance. Substituting in (43) we obtain the new lagrangian expressed in terms of the h field. The result is, where the electric and magnetic fields, computed from (11), are given by, Since the magnetic field has only one component B 12 , it is convenient to express it in the way done above. We now perform a canonical analysis of the above model. The canonical momenta are defined by, where the electric field is given in (61). A useful identity that will be used later on follows, There is one primary constraint, while the other is a true momenta. The total hamiltonian is now found to be, 9 The inverse relation involves the trace of h i j , where, and the primary constraint is enforced by the lagrange multiplier λ. Time conserving the primary constraint yields the Gauss constraint, There are no more constraints since time conservation of the Gauss constraint yields a vanishing result, The physical space is defined to be that space which is annihilated by the first class constraints, so that the total hamiltonian in the physical subspace, after using the identity (63), simplifies to, which takes on a familiar look. Volume preserving diffeomorphism symmetry and linearised gravity We next consider the gauge symmetries which will eventually lead to volume preserving diffemorphisms.The generator of the gauge transformations is given, as usual, by a linear combination of the first class constraints, where α 0 , α are the gauge parameters. Then the gauge variations are given by, where we have used the basic Poisson algebra (1) to compute the above brackets. Since there is only one primary first class constraint, there is one independent gauge parameter. As done earlier we get the relation between the parameters by taking the variation on either side of the electric field in (61). As the electric field is linearly related to the canonical momenta (62) it is obviously gauge invariant. Using the transformations (72), we find, which immediately yields α 0 = −α, so that the gauge generator takes the final form, The transformation on the A 0 and h fields may be expressed as, The second relation is exactly the transformation of a spatial metric under volume preserving diffeomorphisms, 10 x i → x i + η i because ∂ i η i = 0, as seen from (75). A similar transformation was discussed recently in [11] where the metric was traceless h ii = 0 leading to unimodular gravity. Our example is general (h ii = 0) and hence does not have this restriction. 
Instead of considering h i j as a gauge field it is possible to interpret it as the linear correction to g i j , expanded around a flat background, Under the infinitesimal volume preserving transformation mentioned earlier the metric transforms as, where the first piece is the transport term while the other two come from the form variation of the space components of a second rank tensor. If we now substitute (76) in (77) and retain terms in the leading order only, then the result (75) is reproduced. As is known, volume preserving transformations lead to a nonlinear realisation of the symmetry given by, where, in the current example, 11 as may be easily verified by using (77) and the definition of the transformation parameter given in (75). Similar conclusions hold for the transformation of the field A 0 . By including the transport term, its total variation is obtained from (75), which also satisfies the closure relation (78). So far we have been discussing generators and transformations related to the gauge sector. For the matter sector, the corresponding operators are the charge density ρ that couples with A 0 and the stress tensor T i j that couples with A i j . For the charge sector, the change of any matter field (x) under infinitesimal gauge transformations is given by, If we take the variation of the charge itself by putting = ρ in the above relation and ensure consistency with the closure relation for the diffeomorphisms (78) then the algebra of charges follows, We will now exploit these results to establish a connection with the lowest Landau level problem. To do that we first review the Landau problem. Connection with Landau problem in presence of a strong magnetic field In order to establish a clean connection with the physics of the lowest Landau level problem, we first consider the lagrangian of a charged particle moving in a plane under the influence of a constant magnetic field B , where we have set c = 1 and work in the radiation gauge (∂ i A i = 0), so that, and V is the potential from which other forces can be derived. The equation of motion following from the lagrangian is, which is the Lorentz force law. In the hamiltonian formulation, the conjugate momenta are given by, The canonical hamiltonian is obtained from the lagrangian by a standard Legendre transformation, where π i is the kinematical momenta obtained from the canonical momenta by a minimal substitution, The projection to the lowest Landau level is achieved in the strong magnetic field case. Then the mass term in (83) can be set to zero, leading to the lagrangian, This reduction to a first order system enables one to simply read off the brackets without entering into the elaborate Dirac procedure. The canonical pair is (x 1 , eBx 2 ) so that the basic algebra is given by, The equation of motion is the same as found from (85) by putting m = 0, These results are now rederived in the hamiltonian formulation. It is done not merely to establish compatibility but also to provide justification in those examples, one of which will be treated in Sect. 6.1, where a straightforward lagrangian approach is unavailable. The hamiltonian following from (89) is, It will reproduce the above equation of motion (91) provided we take the basic algebra as (90), Let us now start from the hamiltonian (87) in the m → 0 limit. To make the first term meaningful it is necessary to take π i = 0. One could argue that one might as well take π 2 i = 0. 
The actual justification for taking π i = 0 comes from (86) and (88) which shows that for m = 0 we have π i = 0. Now the first term in (87) has to be interpreted by initially setting the numerator to zero strongly in which case the hamiltonian reduces to the earlier result (92) derived directly from the lagrangian (89), so that consistency is retained. Once π i = 0 strongly, there is a clash among the various Poisson brackets. Thus it is necessary to work with Dirac brackets, interpreting π i ≈ 0 as a pair of second class constraints [16]. The Poisson algebra among this pair is, Now the Dirac brackets (denoted by a star) among the coordinates is obtainable using the definition, where C kl = (eB) −1 kl is the inverse of (94). The result is, which reproduces (90). Thus, given a hamiltonian like (87) it is possible to compute the relevant Dirac brackets by this approach, even if the lagrangian is not known. It is now feasible to make contact with the volume preserving diffeomorphisms satisfying the nonlinear closure (78) and (79) discussed in the previous section. The algebra of the parameters (79) is now lifted to a commutator, which may be expressed in terms of the algebra of the coordinates as, Comparing (97) and (98) yields, Identifying, the parametric algebra associated with the volume preserving diffeomorphisms becomes identical with the algebra (90) 12 of the lowest Landau level problem. Physics of lowest Landau level problem and algebra of charges The nontrivial charge algebra (82) is a characteristic of noncommuting coordinates. If the coordinates were commuting, the charge algebra would be trivial, i.e. vanishing. Indeed such a noncommutative algebra has appeared naturally in the context of noncommutative fluid dynamics and magnetohydrodynamics. Moreover, since fluid dynamics can be interpreted as an example of a volume preserving diffeomorphism invariant theory, it is possible to understand the relation (82) from that point of view. In the hamiltonian formulation of Eulerian fluids, the particle coordinate is denoted byX i (t) where i labels the particle. Then the charge density is given by, where, for simplicity, the mass parameter has been set to unity and N is the number of particles. The discrete particle labels may be replaced by continuous spatial arguments (omitting time), A volume integral of the density ρ yields the total mass which has been normalised to unity. If the coordinates commute the 12 The classical bracket is lifted to a commutator by multiplying it with i. charge algebra vanishes. However, if we take the algebra among the coordinates that is relevant for the lowest Landau level problem (96), so that, which is the field theoretic analogue of (96), lifted to a commutator, we obtain [16], which reproduces (82) after the identification (100) is used. It is also possible to construct noncommutative magnetohydrodynamics such that the cherished charge algebra (82) or (104) is obtained. This has a close parallel with the physics of the lowest Landau level problem including the corresponding Dirac analysis. 
The equations governing the motion of a charged fluid with density ρ and mass parameter m (introduced for dimensional purpose) moving on a plane with velocityv, subjected to a constant external magnetic field B perpendicular to the plane, are given by the continuity equation, and the Euler equation, where extra forces F i are defined from a potential [16], The continuity and Euler equations (105), (106) are obtained by taking the Poisson brackets of ρ and v i with the hamiltonian, provided the brackets among the basic variables are taken as, where, is the vorticity of the fluid. For a strong magnetic field the mass parameter goes to zero as may be seen from (106). In that case, for a meaningful hamiltonian (108) to exist, the momenta π i should vanish. A more clear cut justification for this was given in the basic quantum mechanical Landau problem. 13 Such a lagrangian is nonexistent here but the hamiltonian has a similar structure. Putting π i = 0 directly in the above algebra leads to inconsistencies. Hence recourse is taken to the Dirac analysis of constraints. The constraint π i ≈ 0 is implemented weakly. In fact it forms a pair of second class constraints. These may be strongly imposed by calculating the relevant Dirac brackets. The ρ − ρ Dirac bracket, elevated to a commutator, is precisely (104) [16]. Using this algebra the appropriate equations of motion are reproduced by taking the hamiltonian as, obtained by putting π 2 i = 0 in (108), 14 Conclusions We have given a hamiltonian analysis of higher rank symmetric gauge theories, focusing on aspects that were either partially or, not highlighted. Instead of introducing constraints and transformation laws by hand, we proceed from a higher derivative lagrangian and generate these by adopting Dirac's algorithm of constrained systems. In this way we do not miss any constraints, either first or second class. Neither is there any lack of uniqueness or consistency. Of particular interest is the (2 + 1) dimensional traceless scalar charge theory which was treated here in a different way by first imposing the traceless condition A ii = 0 in the lagrangian by means of a multiplier and then doing the canonical analysis. This is important since the canonical analysis and imposition of the traceless constraint are noncommutative, as explained in details below (47). The theory led to first and second class constraints, both of which have distinct roles. Solving the traceless constraint explicitly by expressing A i j in terms of another (traceful) field (h i j ), it was found that the new theory was equivalent to linearised gravity with volume preserving diffeomorphisms. Explicit forms for the action, constraints and the transformations rules were found in the theory describing linearised gravity. A direct connection of this symmetry with that in the lowest Landau level problem was shown. The modified charge algebra was identical to that found in noncommutative fluid dynamics or in magnetohydrodynamics in presence of a strong magnetic field. The systematic analysis of constraints done here may be extended in other directions. One possibility is the inclusion of higher derivative Chern-Simons terms and study their effects. In standard gauge theories their inclusion has led to
High-density microelectrode array recordings and real-time spike sorting for closed-loop experiments: an emerging technology to study neural plasticity

Understanding plasticity of neural networks is a key to comprehending their development and function. A powerful technique to study neural plasticity involves recording and control of pre- and post-synaptic neural activity, e.g., by using simultaneous intracellular recording and stimulation of several neurons. Intracellular recording is, however, a demanding technique and has its limitations in that only a small number of neurons can be stimulated and recorded from at the same time. Extracellular techniques offer the possibility to simultaneously record from larger numbers of neurons with relative ease, at the expense of increased efforts to sort out single neuronal activities from the recorded mixture, which is a time-consuming and error-prone step referred to as spike sorting. In this mini-review, we describe recent technological developments in two separate fields, namely CMOS-based high-density microelectrode arrays, which also allow for extracellular stimulation of neurons, and real-time spike sorting. We argue that these techniques, when combined, will provide a powerful tool to study plasticity in neural networks consisting of several thousand neurons in vitro.

INTRODUCTION
The understanding of neural circuits and their activities is to a major extent based on measurements with extracellular electrodes. This is due to the fact that extracellular recordings are relatively easy to perform and very well established. In contrast to single-cell measurements with intracellular recording techniques, extracellular electrodes pick up the action potentials (spikes) of all neurons in their vicinity. This is a blessing as well as a curse. An advantage is that in principle several neurons can be measured simultaneously using a single extracellular electrode, but the price to pay is the need to assign single spikes to their putative neuronal sources. This problem is referred to as spike sorting; it is known to be difficult and error-prone (Lewicki, 1998) and often involves a highly time-consuming, manual component. Depending on the experiment, time-consuming spike sorting can be regarded as a mere inconvenience, and many studies have focused on the development of spike sorting algorithms for the offline analysis of the recordings after performing the experiment (see e.g., Letelier and Weber, 2000; Shoham and Fellows, 2003; Delescluse and Pouzat, 2006). For real-time closed-loop experiments and brain-machine interfaces (BMI), however, it is absolutely necessary to obtain spike trains already during the recording, so that time-consuming spike sorting is not merely an inconvenience but essentially prohibits performing such experiments. Therefore, spike sorting is usually avoided in those experiments by detecting just the presence of action potentials, e.g., by applying a voltage threshold, which can be implemented relatively easily and efficiently also in hardware (Guillory and Normann, 1999). Real-time spike detection allows for studying closed-loop feedback of neural activity, for example, through the implementation of visual feedback to an awake monkey (Fetz, 1969), or by applying electrical stimulation to neurons in an awake animal (Jackson et al., 2006).
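The threshold-based detection just mentioned is simple enough to be written in a few lines. The sketch below uses a robust noise estimate based on the median absolute deviation; the threshold factor and the refractory window are illustrative choices, not values taken from the cited works.

# Minimal sketch of threshold-based spike detection on a single
# band-pass-filtered channel.  The noise estimate (median absolute
# deviation) and the threshold factor are illustrative choices only.
import numpy as np

def detect_spikes(x, fs, thr_factor=5.0, refractory_ms=1.0):
    """Return sample indices at which the trace x crosses a negative
    threshold, keeping at most one event per refractory window."""
    noise = np.median(np.abs(x)) / 0.6745      # robust noise estimate
    thr = -thr_factor * noise
    crossings = np.where(x < thr)[0]
    min_dist = int(refractory_ms * 1e-3 * fs)
    events, last = [], -min_dist
    for i in crossings:
        if i - last >= min_dist:
            events.append(i)
            last = i
    return np.array(events, dtype=int)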
Electrical stimulation of neurons that depends on the activity of other neurons (see also Figure 1) was also successfully used in neural cultures on top of multi-electrode arrays (MEAs): electrical feedback stimuli have been used to control the bursting activity of cultured neurons in Wagenaar et al. (2005) and the connection strengths between neurons in Müller et al. (in review). The closed-loop approach can also be used to connect a neural network to a robot (Bontorin et al., 2007; Potter, 2010). For a review of real-time closed-loop electrophysiology see, e.g., Arsiero et al. (2007). These studies, however, were all realized without using spike sorting, either by limiting the number of single neurons that were recorded from (by trying to detect only one specific neuron per electrode), or by using multi-unit activities. Recent developments in measurement techniques and in spike sorting algorithms make it now possible to overcome some of the limitations of extracellular recordings. A possible setup using spike sorting for closed-loop stimulation of specific neurons is shown in Figure 1. To use the closed loop, e.g., to investigate spike-timing-dependent plasticity, the real-time spike-sorting-induced latency may not exceed a few milliseconds. In the following, we will review the advances in MEA recording technology with a special focus on high-density MEAs and show that the high density of the electrodes provides unprecedented signal quality that holds the promise to enable clear and reliable assignment of single spikes to putative neurons (Litke et al., 2004; Prentice et al., 2011; Jäckel et al., 2012).

FIGURE 1 | Principle of real-time closed-loop experiments with spike sorting. Sketch of a potential real-time closed-loop stimulation on an HDMEA, combined with spike sorting. The electrical activity of three neurons (colored triangles) is measured by a high-density array of electrodes (light blue squares). First, the recorded signal is bandpass-filtered. In a second step, spike sorting is applied to compute the spike times of the single neurons. Depending on the sorted spike trains and the stimulation logic, the postsynaptic neuron (N3) is stimulated (Müller et al., in review). If the stimulation latency (t_delay) is short enough, the stimulation can be timed with respect to the arrival of the action potentials of N1 and N2 at their synapses to N3 (t_syn). This can be used to change the synapse characteristics via spike-timing-dependent plasticity (Feldman, 2012). Parts of this graph were adopted from Einevoll et al. (2011).

MEA RECORDING TECHNOLOGY
Planar MEAs are two-dimensional arrangements of recording electrodes for in vitro extracellular measurements of cultured neuronal cells or slice preparations. They allow for recording of electrical activity simultaneously on many electrodes at high temporal resolution. Thus, they represent an important tool to study the dynamics in neuronal networks (e.g., Potter et al., 2006; Bontorin et al., 2007; Chao et al., 2007; Rolston et al., 2010; Müller et al., in review). An important parameter of MEAs is the inter-electrode distance (IED). For multi-electrode arrangements on the shafts of needles, such as tetrode configurations (Eckhorn and Thomas, 1993; O'Keefe and Recce, 1993), this distance is small enough (less than 20 µm) that a single action potential can be simultaneously detected on several electrodes.
The maximal distance between a neuron and an electrode at which the action potentials of the neuron can still be measured is assumed to be smaller than 50-70 µm, although this greatly depends on the recording setup and the respective preparation (Buzsáki, 2004; Frey et al., 2009b). For traditional, commercially available MEAs, however, the IED was usually much larger [100-200 µm IED and 60-200 metal electrodes on a glass substrate (Stett et al., 2003)] so that MEA recordings constituted, in principle, multiple simultaneous single-electrode recordings. In other words, the distance between the electrodes was too large to detect activity of the same single neuron on multiple electrodes. From the signal processing point of view, this is an unfavorable recording situation, as recording the same action potential with more than one electrode was shown to strongly increase spike sorting performance (Gray et al., 1995). Furthermore, many neurons will lie in between electrodes and not be measured at all. To ensure that neurons lie close to the electrodes, additional measures can be taken during the preparation of the cultures, such as patterning the cells at electrode locations (Shein et al., 2009), but this adds complexity to the experimental procedure. Recent advances in microtechnology, especially the realization of MEAs in complementary metal-oxide-semiconductor (CMOS) technology (Berdondini et al., 2009; Lambacher et al., 2010; Hierlemann et al., 2011), made it possible to greatly increase the number of electrodes per MEA, for example to 4096 in Berdondini et al. (2009), 11,011 in Frey et al. (2010), or 16,384 in Lambacher et al. (2010), while decreasing the IED to less than 20 µm, a distance comparable to that of the previously mentioned electrode ensembles on needles (e.g., tetrodes). Additionally, this technology provides increased signal quality through on-chip amplification and digitization circuits. Using on-chip multiplexing schemes, high-density MEA (HDMEA) systems have been realized, which enable the readout of large numbers of electrodes arranged at high spatial density (Eversmann et al., 2003; Berdondini et al., 2005; Hutzler et al., 2006; Frey et al., 2009a). The closely spaced microelectrodes of HDMEAs ensure that virtually every neuron on the array is detected by multiple electrodes. Along with the additional information about where the signal originated, the high electrode density greatly improves spike sorting (Gray et al., 1995; Harris et al., 2000; Einevoll et al., 2011; Prentice et al., 2011). Figure 2 shows an example of such a recording. However, HDMEAs improve not only recording but also stimulation capabilities. Localized, reliable stimulation of single cells (Hottowy et al., 2012) is a powerful tool for plasticity experiments (Müller et al., in review). Indeed, subcellular-sized electrodes have been shown to provide reliable stimulation of individual neurons in vitro. This has been demonstrated using MEAs with particularly high electrode densities that feature only stimulation capabilities, such as those of Braeken et al. (2010) and Lei et al. (2011). Procedures for optimally stimulating a given neuron by using multiple electrodes and complex stimulation patterns are currently under investigation. HDMEAs featuring recording and stimulation circuitry (Frey et al., 2010; Eversmann et al., 2011) combine the advantages of reliable spike sorting and localized single-neuron stimulation, which paves the way to truly bidirectional experiments on the single-cell level within the network context.

FIGURE 2 | Example HDMEA recording (cf. Frey et al. (2009a), however, with cultured cortical neurons). Spikes of individual neurons are recorded by multiple electrodes. Colored traces are identified spikes from two neurons. Note that on the trace of electrode 4, the two spikes are hardly distinguishable and that only combining the information of different channels enables unambiguous spike assignment. (Right top) Several superimposed spike traces of the two neurons. The colored traces are the spike-triggered averages (STAs) of the two neurons on the respective electrodes. The templates of the two neurons (green and violet) spatially overlap (right bottom), indicating that the same set of electrodes recorded from both neurons. (B) Spikes (left) and templates (right) for 10 identified neurons (colored traces). For each neuron, the electrode was chosen where its template had the largest peak-to-peak amplitude (indicated by the colored arrows in the right panel). Note that some of the spikes are visible on more than one electrode (three channels marked by asterisks) and that high-amplitude spikes on one electrode can overlap with spikes on another electrode. Right: for illustration purposes, the identified templates are superimposed onto a MAP2 staining of the culture they were recorded from (Bakkum et al., in review). Note that the IED of the electrodes is similar to the distance between neurons.

REAL-TIME SPIKE SORTING ALGORITHMS
The overall spike sorting process consists of a number of nontrivial processing steps (for a schematic of the spike sorting process see, e.g., Einevoll et al., 2011). First, spikes need to be detected in the noisy signals. For multi-electrode-shaft and HDMEA recordings, a single action potential can be detected on multiple electrodes. Then, a short piece of data is usually cut out around the detected events (potentially on multiple electrodes) and structured into a vector in a high-dimensional space. Spike features are then extracted from this piece using, e.g., principal component analysis (Lewicki, 1998). This step aims at reducing the dimension of the vector space in order to keep dimensions that carry most information about the origin of the spikes and to remove dimensions that only carry noise. The goal of the feature space representation and dimensionality reduction is that spikes from the same neuron, i.e., spikes that appear similar to each other, are located closely together while being distant from spikes of other neurons. The most demanding step, achieved by using a clustering routine, is to determine how many neurons were recorded from, and which spike was produced by which neuron. Since most standard spike sorting procedures (e.g., Harris et al., 2000; Shoham and Fellows, 2003; Quiroga et al., 2004) need to store all individual spikes before the clustering step, they are not applicable for online spike sorting, with the notable exceptions of Öhberg et al. (1996), where a neural network is used for real-time spike sorting, and Rutishauser and Schuman (2006), where the clusters are formed in an online procedure. The output of the spike sorting consists of the number of neurons, the individual neuronal spike trains, and the prototypic spike waveforms (called templates) for every neuron. Since some data from a certain preparation can already be recorded and stored prior to a specific experiment, templates can be pre-computed using an offline spike sorter.
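The offline pass just outlined — waveform cut-out, feature extraction, clustering, and template computation — can be sketched as follows. The sketch reuses the hypothetical detect_spikes() helper from the earlier example; the choice of PCA with k-means and all parameter values are purely illustrative and do not reproduce any of the cited sorters.

# Sketch of an offline sorting pass that produces per-neuron templates.
# Reuses detect_spikes() from the earlier sketch; PCA + k-means are
# illustrative stand-ins for the clustering step, and all parameters
# (window length, number of components, number of units) are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cut_waveforms(data, spike_idx, pre=10, post=22):
    """Cut multi-channel snippets of shape (n_spikes, n_channels, pre+post)
    around each detected spike; data has shape (n_channels, n_samples)."""
    keep = [i for i in spike_idx if i - pre >= 0 and i + post <= data.shape[1]]
    return np.stack([data[:, i - pre:i + post] for i in keep])

def compute_templates(data, fs, n_units=10):
    """Detect spikes on the channel with the largest signal variance,
    cluster the cut-out waveforms, and return the per-cluster mean
    waveforms (templates) together with the cluster labels."""
    ref = int(np.argmax(np.std(data, axis=1)))       # reference channel
    spikes = detect_spikes(data[ref], fs)
    wf = cut_waveforms(data, spikes)                 # (n_spikes, n_ch, n_samp)
    feats = PCA(n_components=5).fit_transform(wf.reshape(len(wf), -1))
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(feats)
    templates = np.stack([wf[labels == k].mean(axis=0) for k in range(n_units)])
    return templates, labels, spikes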
This way, fast and efficient classifiers can be designed based on stored templates that are able to sort spikes in real-time. It does not come as a surprise that almost all research efforts in the direction of real-time spike sorting follow this approach (Friedman, 1968;Mishelevich, 1970;Roberts and Hartline, 1975;Stein et al., 1979;Salganicoff et al., 1988;Yang and Shamma, 1988;Gozani and Miller, 1994;Santhanam et al., 2004;Asai et al., 2005;Takahashi and Sakurai, 2005;Vollgraf et al., 2005;Biffi et al., 2010;Franke, 2011), although not all of these approaches explicitly make use of templates to derive spike classifiers. So far, real-time spike sorting was mainly achieved by deriving simple hardware-implementable decision rules, based on the spike templates. One such rule is to check, if the spike voltage sample at a given time lies between a lower and an upper threshold relative to the peak of the spike waveform (a so called hoop), as described in Santhanam et al. (2004). Such decision rules are also used in commercially available recording systems and were individually applied to single electrodes (Nicolelis et al., 1997;Wessberg et al., 2000;Taylor et al., 2002;Guenther et al., 2009). However, there have been only few applications of these approaches to multielectrode arrays in real-time scenarios, such as Takahashi and Sakurai (2005), where independent-component analysis was used to separate individual neuronal activities. The information of several recording channels must be efficiently combined for multi-electrode recordings. Extending a spike sorting method that works for single electrodes to multi-electrodes is not a trivial task and might not be possible for all methods. As already discussed, HDMEAs impose even higher demands on the methods due to the large overall number of simultaneously recorded neurons and the large number of electrodes that are available per single neuron. There are a number of approaches to spike sorting of HDMEA data (Meister et al., 1994;Litke et al., 2004;Jäckel et al., 2011Jäckel et al., , 2012Prentice et al., 2011;Fiscella et al., 2012) but none of those has been evaluated with respect to low latency real-time spike sorting so far. There is also no commercial system with real-time spike sorting available, and it is currently unclear how effective the application of the "hoop"-approach (Santhanam et al., 2004) is. Another ICA-based real-time approach has been described in Takahashi and Sakurai (2005), but the performance of ICA to separate all neurons of HDMEA data sets was found to be limited . LINEAR FILTERS FOR SPIKE SORTING Linear-filter-based spike sorting approaches rely on linear filters that preferentially respond to one template that is considered to represent spikes from a single neuron (Roberts and Hartline, 1975;Stein et al., 1979;Gozani and Miller, 1994;Vollgraf and Obermayer, 2006;Franke et al., 2010;Franke, 2011). Spikes can then be detected by thresholding the filter outputs. An alternative method was suggested in Vollgraf et al. (2005), where a preprocessing filter was designed to be tuned to the average spike waveform of all spikes. However, detected spikes have subsequently to be clustered in the filter output space, which introduces a complex problem after the filtering. Filter-based methods hold the promise to be suitable even for low-latency real-time spike sorting of MEA: linear filters can be efficiently implemented in hardware and they scale well with the number of recording electrodes. 
Firstly, all electrodes can be processed in parallel, and, secondly, if spikes of one neuron cannot be detected on a given electrode, this electrode can be ignored for the corresponding filter (Jäckel et al., 2011). It was argued that linear-filter-based spike sorting provides only moderate performance in terms of sorting quality (Wheeler and Heetderks, 1982; Lewicki, 1994; Guido et al., 2006), but it was shown more recently that this could be due to the fact that the candidate filters had been derived in the frequency domain, which was shown to be non-optimal (Vollgraf and Obermayer, 2006). REAL-TIME IMPLEMENTATION The numeric computations behind linear filters are based on multiply-accumulate (MAC) operations. For every recording electrode, a set of filter coefficients has to be multiplied with the most recent samples of the recordings, and all multiplications over all electrodes are then summed up. Since the multiplications are independent of each other, they can be done in parallel on a digital signal processor (DSP) as a single processing step. DSPs are well suited for implementing MAC-based algorithms, but filter-based spike sorting algorithms can consist of more complex operations [like buffering the filter outputs, thresholding, and estimating the filter with the maximal output (Franke, 2011)], which requires more flexibility than provided by DSPs. Such more complex operations can, however, be implemented by using field-programmable gate arrays (FPGAs). The digital interface of a MEA can be controlled by these fast, reprogrammable devices. By integrating data analysis modules as well as stimulation logic directly on the FPGA, the complete closed-loop experiment can be realized in "programmable hardware" (Hafizovic et al., 2007). This obviates the necessity to route the signal path through a PC, which would increase latency and jitter. Another advantage of FPGAs is the relatively large available memory to store filter coefficients. OVERLAPPING SPIKES When two spikes occur nearly at the same time, they can cause problems for the spike sorting: the overlapping signals could be detected as a single spike instead of being recognized as two spikes, and the distorted overall waveform can lead to misclassifications. With multi-electrode recordings, there can be two different types of spike overlaps: (1) temporal overlaps include spikes that occur nearly at the same time but on different electrodes, while (2) spatio-temporal overlaps occur nearly at the same time and also on the same electrodes. Purely temporal overlaps do not cause any problems for filter-based methods, as the filters corresponding to one neuron can be made "blind" to the electrodes of another neuron and can be treated separately. Spatio-temporal overlaps (see Figure 2), however, will distort the filter outputs of both filters. A way to solve this problem is to remove the corresponding waveform from the data once a spike has been detected and to then recompute the filter outputs (Gozani and Miller, 1994; Franke, 2011). This approach is not well suited for a demanding real-time implementation, since it will generate a larger delay for overlapping spikes than for non-overlapping ones. The realization of an efficient overlap resolution technique for high-electrode-density data in real-time applications is still an open issue. DISCUSSION/OUTLOOK A number of issues in implementing real-time spike sorting still remain unsolved. 
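The MAC structure of a multi-channel linear filter described above can be sketched as follows. This is a plain NumPy illustration of the per-channel multiply-accumulate and the summation over electrodes, not a DSP or FPGA implementation, and the filter coefficients and threshold are placeholders.

```python
import numpy as np

def multichannel_filter_output(data, filt):
    """Output of one multi-channel linear filter.

    data: (n_channels, n_samples) array of band-pass filtered recordings
    filt: (n_channels, n_taps) array of filter coefficients for one unit
    Each channel is correlated with its own coefficients (a sliding
    multiply-accumulate), and the per-channel outputs are summed, mirroring
    the parallel per-electrode processing discussed in the text.
    """
    n_channels, n_taps = filt.shape
    out = np.zeros(data.shape[1] - n_taps + 1)
    for ch in range(n_channels):
        out += np.correlate(data[ch], filt[ch], mode="valid")
    return out

def detect_unit(data, filt, threshold):
    """Threshold the filter output to obtain putative spike times of that unit."""
    out = multichannel_filter_output(data, filt)
    return np.where(out > threshold)[0]
```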
It would be desirable to make the linear filters as short as possible to achieve the smallest possible delay, since the delay of a causal filter is directly related to its length (Vollgraf and Obermayer, 2006). However, it has not yet been investigated how short the filters for HDMEA recordings can be while still ensuring a high spike sorting quality. Furthermore, the filters described in Roberts and Hartline (1975) are, in principle, more powerful than a simple matched filter (Vollgraf et al., 2005; Franke, 2011), since they try to suppress spikes from other neurons. This may be useful to resolve overlapping spikes but comes at a price: the filters might be less robust to noise, since they are under stronger constraints. Additionally, the spike waveforms of two different neurons may not necessarily be linearly independent, which poses a problem for this kind of linear filter. Given the high spatial resolution of HDMEAs, it will be interesting to investigate how the quality of the results obtained by using simple spike sorting algorithms compares to that of more complex ones. Promising algorithms for use with high electrode density include the aforementioned "hoop" approach (Santhanam et al., 2004), or a sorting that is solely based on the identities of the electrodes on which a spike was detected. An important issue for spike sorting is the occurrence of bursts. Here, a neuron produces potentially many spikes with successively decreasing amplitudes and, possibly, varying waveforms (Fee et al., 1996). For most algorithms, it is not known how the spike sorting error rate is affected by bursts. HDMEAs seem to offer the potential to correctly sort spikes according to their relative amplitude distribution over many electrodes, which may be a robust feature that is also preserved during bursts (Rinberg et al., 1999). HDMEAs are a valuable tool to study neural networks and, in combination with real-time spike sorting, hold great promise for new closed-loop experiments to study, e.g., neural plasticity. We have discussed the potential applicability of spike-sorting algorithms for this purpose and come to the conclusion that the combination of hardware-optimized algorithms with HDMEA recordings may enable high-performance spike sorting of more than a hundred neurons with latencies in the range that is required to stimulate and control synaptic plasticity (Feldman, 2012). This may allow for experiments similar to those reported in Fetz (1969), Jackson et al. (2006), Bontorin et al. (2007), and Rebesco et al. (2010), however with the possibility of using sophisticated feedback stimuli upon the occurrence of defined signature signals of single neurons within a local population.
v3-fos-license
2022-12-04T17:52:18.788Z
2022-12-01T00:00:00.000
254217846
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/1424-8220/22/23/9345/pdf?version=1669866222", "pdf_hash": "f408d9a95262983c8acb10c16e0c290181de7dde", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:132", "s2fieldsofstudy": [ "Computer Science" ], "sha1": "4ac60ba91ec6a73606e8978daec5f713045d6c62", "year": 2022 }
pes2o/s2orc
Traffic Sign Recognition Based on the YOLOv3 Algorithm Traffic sign detection is an essential component of an intelligent transportation system, since it provides critical road traffic data for vehicle decision-making and control. To solve the challenges of small traffic signs, inconspicuous characteristics, and low detection accuracy, a traffic sign recognition method based on an improved You Only Look Once v3 (YOLOv3) is proposed. The spatial pyramid pooling structure is fused into the YOLOv3 network structure to achieve the fusion of local features and global features, and a fourth feature prediction scale of 152 × 152 size is introduced to make full use of the shallow features in the network to predict small targets. Furthermore, the bounding box regression is more stable when the distance-IoU (DIoU) loss is used, which takes into account the distance between the target and anchor, the overlap rate, and the scale. The Tsinghua–Tencent 100K (TT100K) traffic sign dataset's 12 anchors are recalculated using the K-means clustering algorithm, while the dataset is balanced and expanded to address the problem of an uneven number of target classes in the TT100K dataset. The algorithm is compared to YOLOv3 and other commonly used target detection algorithms, and the results show that the improved YOLOv3 algorithm achieves a mean average precision (mAP) of 77.3%, which is 8.4% higher than YOLOv3; in small-target detection in particular, the mAP is improved by 10.5%, greatly improving the accuracy of the detection network while keeping the real-time performance as high as possible. Introduction Currently, automated driving and intelligent transportation systems (ITS) are the principal applications for traffic sign detection and identification technologies. They can give drivers and autonomous vehicles crucial traffic information, so that the latter can make judgments in accordance with the regulations of the road, or alert and direct drivers' operation behaviors in time to reduce traffic accidents. Traffic signs can be broadly divided into three categories: directional signs, warning signs, and prohibition signs. These signs are round or triangular in design, and they are red, yellow, or blue in color. Therefore, classic traffic sign recognition typically uses machine learning techniques to recognize traffic signs or extracts information such as color and shape from traffic signs. Color-based traffic sign detection uses color segmentation to extract characteristics before classification and identification, and is easily affected by lighting variations. According to a previous study [1], color segmentation that uses HSI space and examines only hue and saturation is not influenced by brightness variations. Due to the high demands of color recognition on variables such as weather and detection distance, the detection approach based on color features can be employed for high-definition image recognition but not for grayscale image recognition [2]. A shape-based traffic sign identification approach for grayscale images was proposed in another study [3]; it transforms triangular traffic sign detection into simple line segment detection, can properly recognize traffic signs, and is unaffected by distance. 
A support vector machine-based traffic sign detection and recognition system was proposed in another study [4]; it uses the generalization property of a linear support vector machine to first segment the color of traffic signs and then classify the shape. The method of detecting color and shape features separately has two drawbacks: first, color segmentation is performed to obtain the region of interest, and if no region of interest is detected, the shape-based detection is not performed at all; second, color segmentation requires a fixed threshold to be set manually, making traffic sign detection complicated and time-consuming. To solve these issues and increase detection performance, one study [5] used the AdaBoost framework to perform simultaneous color and shape modeling detection. Changes in external conditions, such as lighting and traffic sign color changes, can affect color- and shape-based traffic sign detection. The detection effect is unstable, impairing the traffic sign recognition system's performance and making it vulnerable to missed and false detections of traffic signs. Neural networks are being used more frequently to detect targets as deep learning technology advances; examples of these algorithms include Faster R-CNN [6], SSD [7], and YOLO [8], which are primarily separated into single-stage and two-stage detection approaches. A previous study [9] presented an enhanced detection network based on YOLOv1 to address the issues of low accuracy and slow detection speed of standard traffic sign detection methods; this network enhanced traffic sign detection speed and lowered the hardware requirements of the detection system. Another study [10] suggested a traffic sign detection approach based on an enhanced Faster R-CNN, with a 12.1% improvement in mAP, which successfully addressed issues such as low recognition efficiency and raised the precision of traffic sign detection and recognition. In [11], the CCTSDB dataset was obtained by expanding the Chinese Traffic Sign Dataset (CTSD) and updating the marker information based on the improved YOLOv2 target detection algorithm; the CCTSDB dataset only contained three categories of traffic signs, which is insufficient for the challenging task of traffic sign recognition. The TT100K [12] dataset, created by Tsinghua University and Tencent in collaboration, was extracted from the Chinese Street View panorama and covers a wide range of lighting and weather conditions, making it more representative of the actual driving environment. One study [13] used DenseNet instead of ResNet in the backbone network of YOLOv3 and experimentally validated it on the TT100K dataset; the algorithm improves the real-time performance of the detection model, but the accuracy and recall tend to be low when it comes to small targets such as traffic signs, which implies serious missed detection. In general target detection tasks, the target to be detected is typically large and its features can be extracted easily, whereas small targets such as traffic signs make the detection task considerably more challenging. Due to the FPN structure that YOLOv3 introduces, it is able to detect targets at various scales by utilizing multi-scale feature fusion, which is appropriate for complicated traffic scenes and has shown some promise in the detection of small targets. However, there is still some room for improvement for the high-resolution images of the TT100K traffic sign dataset. 
In conclusion, the neural network-based approach can successfully address issues with low recognition efficiency, missed detection, and false detection while also enhancing the precision of traffic sign detection and recognition. Neural network-based methods have better accuracy or faster detection than traditional methods but cannot obtain both detection speed and detection accuracy. In addition, most traffic sign detection uses the German Traffic Sign Dataset (GTSDB), and traffic signs in Germany are different from those in China; there are fewer studies on traffic sign detection and recognition in China. Therefore, to address the problems in the above methods, this paper uses the TT100K dataset to train and detect Chinese traffic signs and improves and adjusts the YOLOv3 network, mainly with the following improvements: (1) A fourth feature prediction scale of 152 × 152 size is added to the YOLOv3 network structure to take full advantage of the shallow features in the network to predict small targets, and the spatial pyramid pooling structure is fused to achieve the fusion of local and global features. (2) The DIoU loss, which takes the distance between target and anchor, the overlap rate, and the scale into account, is used for faster convergence and more consistent target frame regression, making the bounding box regression more stable. (3) The majority of the traffic signs in the TT100K dataset are small- and medium-sized targets, with only a few large targets, so using the original anchors is not a viable option; the K-means clustering algorithm is used to recalculate 12 anchors for the TT100K dataset, and a data augmentation strategy is used to balance and expand the dataset's imbalanced number of target categories. The YOLOv3 Algorithm YOLOv3 [14] is Redmon's improved, single-stage target detection algorithm based on YOLOv2, which has improved detection accuracy and real-time performance and outperforms other algorithms in terms of speed and accuracy. 
YOLOv3 is currently the most popular algorithm in the YOLO family and is widely used in real detection scenarios [15]; the YOLOv3 network structure is shown in Figure 1. The fully convolutional structure used by YOLOv3 is not constrained by the size of the image input. The pooling and fully connected layers are removed from the entire network structure, and a convolutional layer with a step size of 2 is used instead of the pooling layer for the downsampling operation, which prevents the loss of target information during pooling and facilitates the detection of small targets [16]. In addition, YOLOv3 replaces the DarkNet-19 network structure of YOLOv2 with the DarkNet-53 feature extraction layer. The DarkNet-53 network, which successfully resolves the gradient problem of the deep network and the loss of original information during the multi-layer convolutional operation in order to better extract features and improve detection and classification [17], borrows the residual network structure of ResNet [18] and uses the original output of the previous layer as part of the input to a later layer of the network. As shown in Figure 2, the residual module in YOLOv3 consists of two convolutional layers and a shortcut layer. 
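A minimal PyTorch sketch of such a residual module is shown below. The 1 × 1/3 × 3 kernel sizes, the channel halving, and the batch-norm/LeakyReLU activations follow the commonly published Darknet-53 design and are assumptions insofar as the text only states that the module contains two convolutional layers and a shortcut.

```python
import torch
import torch.nn as nn

class ConvBNLeaky(nn.Module):
    """Convolution + batch norm + LeakyReLU, the basic unit of Darknet-style backbones."""
    def __init__(self, in_ch, out_ch, kernel_size, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride,
                              padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.LeakyReLU(0.1)

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResidualBlock(nn.Module):
    """Two convolutional layers (1x1 then 3x3) plus a shortcut, as in Figure 2."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = ConvBNLeaky(channels, channels // 2, kernel_size=1)
        self.conv2 = ConvBNLeaky(channels // 2, channels, kernel_size=3)

    def forward(self, x):
        return x + self.conv2(self.conv1(x))   # shortcut adds the block input back

# Example: a 256-channel feature map passes through one residual block with its shape unchanged.
if __name__ == "__main__":
    x = torch.randn(1, 256, 52, 52)
    print(ResidualBlock(256)(x).shape)         # torch.Size([1, 256, 52, 52])
```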
Furthermore, YOLOv3 uses the notion of a feature pyramid network (FPN) [19] and introduces the feature pyramid network to forecast feature maps at three scales, with detection scales of 13 × 13, 26 × 26, and 52 × 52. In the FPN network, feature extraction by the convolutional neural network proceeds bottom-up, while the upsampling of the convolutional layer feature maps proceeds top-down, as shown in Figure 3. Deep convolutional layers with wide receptive fields are appropriate for predicting large targets, whereas shallow convolutional layers with small receptive fields are suitable for predicting small targets. The properties of the two layers are combined by lateral connection. As a result, YOLOv3 is capable of predicting objects of varying sizes and is suitable for a variety of sophisticated application scenarios. Spatial Pyramidal Pooling Structure The spatial pyramid pooling (SPP) structure [20] solves the problem of repeated extraction of image features by convolutional neural networks and greatly improves detection efficiency; the SPPNet network structure is shown in Figure 4. 
To ensure that the resolution of the input image matches the feature dimension of the fully connected layer in a neural network with a fully connected layer, region cropping and scaling operations on the input image are required. Scaling and cropping will result in the loss of picture feature information, lowering detection accuracy and affecting the detection results, whereas SPPNet can overcome the limitation of a fixed input image size, saving computational cost [21]. Improved YOLOv3 Network Structure The basic feature extraction network is commonly downsampled five times with a downsampling rate of 2, giving an overall downsampling multiple of 2 to the fifth power, i.e., 32. If downsampling is continued, the resulting feature map becomes too small and the target information is lost. According to the COCO dataset description, small targets are smaller than 32 × 32 pixels, medium targets are between 32 × 32 and 96 × 96 pixels, and large targets are greater than 96 × 96 pixels [22]. 
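These size definitions can be stated as a small helper function, shown below. The inclusive upper boundary for medium targets is an assumption, since the quoted ranges leave the exact boundary handling open.

```python
def coco_size_category(width, height):
    """Classify a target by pixel area using the size definitions quoted above."""
    area = width * height
    if area < 32 ** 2:
        return "small"
    if area <= 96 ** 2:
        return "medium"
    return "large"

# A typical TT100K sign of roughly 20 x 20 pixels counts as a small target.
print(coco_size_category(20, 20))   # prints "small"
```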
As illustrated in Figure 5, the TT100K traffic sign dataset used in this work was mostly made up of small and medium targets, with large targets accounting for just 7.4% of the total dataset and small targets accounting for 42.5% [23]. The TT100K dataset has a high resolution, with each image having a resolution of 2048 × 2048 pixels, and even the largest of the small-target traffic signs account for less than 0.1% of the entire image, posing a significant challenge to the target detection algorithm. Small targets have limited features and necessitate great localization precision. Despite the introduction of the FPN structure in YOLOv3, which leverages multi-scale feature fusion to produce predictions by fusing the findings of distinct feature layers and is critical for small-target identification, the results were still unsatisfactory. In the YOLOv3 network, the shallow layers contain less feature semantic information but a precise target location, whereas the deep layers contain more semantic information but a coarse target location. As a result, shallow convolutional layers are used to predict small targets, and deep convolutional layers are used to predict large targets. A fourth feature prediction scale of size 152 × 152 was added to the three feature prediction scales of the YOLOv3 network structure in order to fully utilize the shallow features in the network to predict small targets. 
With an input image size of 608 × 608, the output feature map size was 152 × 152 after convolution and a two-fold upsampling, and the feature layer was induced through the route layer; this feature map was fused with the 11th-layer feature to add the fourth feature prediction scale. In addition, the SPP module was added to realize the merging of local and global features by borrowing the notion of SPPNet and combining it with YOLOv3. Before the YOLO detection layer, the SPP module was integrated between the fifth and sixth convolutional layers, and the SPP module's input feature maps and the pooled feature maps were concatenated and passed to the next detection network layer. To accomplish the feature-map-level fusion of local and global features, the SPP module's maximum pooling kernel should be as close as possible to the size of the feature map to be pooled. To minimize the computational effort caused by the SPP module, enrich the feature map expression capability, and increase the detection effect, the SPP module in this research was composed of two parallel branches: a 19 × 19 max pooling layer and a jump connection. Figure 6 depicts the improved YOLOv3 network structure. 
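One possible reading of this two-branch SPP module is sketched below in PyTorch. The stride-1 pooling with half-kernel padding, chosen so that the pooled map keeps its spatial size and can be concatenated with the skipped input, is an assumption, since the text only specifies a 19 × 19 max pooling branch and a jump connection.

```python
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    """Two parallel branches, a 19x19 max pooling layer and a jump (identity)
    connection, whose outputs are concatenated along the channel dimension."""
    def __init__(self, pool_size=19):
        super().__init__()
        # Stride 1 with half-kernel padding keeps the spatial size unchanged,
        # so the pooled map can be concatenated with the skipped input.
        self.pool = nn.MaxPool2d(kernel_size=pool_size, stride=1,
                                 padding=pool_size // 2)

    def forward(self, x):
        return torch.cat([x, self.pool(x)], dim=1)   # channel count doubles

# Example: a 512-channel 19x19 feature map becomes 1024 channels of the same size.
if __name__ == "__main__":
    x = torch.randn(1, 512, 19, 19)
    print(SPPBlock()(x).shape)                       # torch.Size([1, 1024, 19, 19])
```

A subsequent 1 × 1 convolution in the detection head would typically reduce the doubled channel count again.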
Improved Loss Function The loss function of YOLOv3 is composed of the center coordinate loss (loss_xy), the width-height coordinate loss (loss_wh), the confidence loss (loss_conf), and the classification loss (loss_cls), given in Equations (1)-(4); the total loss is loss = loss_xy + loss_wh − loss_conf − loss_cls (5), where the mean square error (MSE) loss function is used for the bounding box regression and cross entropy is utilized as the loss function in loss_conf and loss_cls. However, utilizing MSE as the bounding box regression loss is unfavorable to small-target detection: it is sensitive to object scale and focuses on large-scale targets while being unfriendly to small-scale objects. To balance the loss of large and small targets and improve the detection results by weakening the influence of the bounding box size on the width and height loss, an IoU-type loss function was employed in this paper, with the IoU metric defined as IoU = |B ∩ B_gt| / |B ∪ B_gt| (6). When the bounding box and the target box do not overlap, IoU = 0 does not reflect the distance between the two boxes; when the prediction box and the labeled box completely overlap, IoU = 1, the bounding box's center point cannot be determined, and the size gap with the target box cannot be further optimized. DIoU loss [24] is independent of size, so a large box does not automatically produce a large loss and a small box does not automatically produce a small loss, which addresses this problem; this work therefore used the DIoU loss, whose calculation formula is L_DIoU = 1 − IoU + ρ²(b, b_gt)/c² (7), where b and b_gt denote the central points of the predicted and ground-truth boxes, ρ is the Euclidean distance, and c is the diagonal length of the smallest enclosing box covering the two boxes. DIoU loss minimizes the distance between the two target frames directly, converges quickly, and is more in line with the target frame regression mechanism, which takes into account the distance between the target and anchor, the overlap rate, and the scale, making target frame regression more stable, while still providing a gradient direction for the bounding box when it does not overlap with the target frame. 
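The DIoU loss of Equation (7) can be written out directly. The sketch below follows the standard formulation with corner-format boxes; the small epsilon terms are added only for numerical safety and are not part of the equation.

```python
import numpy as np

def diou_loss(box, gt):
    """DIoU loss for two boxes given as [x1, y1, x2, y2] corner coordinates.

    Implements L_DIoU = 1 - IoU + rho^2(b, b_gt) / c^2, where rho is the distance
    between the box centers and c is the diagonal of the smallest enclosing box.
    """
    # Intersection-over-union, Equation (6)
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_gt = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_box + area_gt - inter + 1e-9)

    # Squared distance between the two box centers
    cb = np.array([(box[0] + box[2]) / 2, (box[1] + box[3]) / 2])
    cg = np.array([(gt[0] + gt[2]) / 2, (gt[1] + gt[3]) / 2])
    rho2 = float(np.sum((cb - cg) ** 2))

    # Squared diagonal of the smallest box enclosing both
    ex1, ey1 = min(box[0], gt[0]), min(box[1], gt[1])
    ex2, ey2 = max(box[2], gt[2]), max(box[3], gt[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + 1e-9

    return 1.0 - iou + rho2 / c2                      # Equation (7)

# Even for non-overlapping boxes (IoU = 0) the center-distance term still
# provides a useful gradient direction, as discussed above.
print(diou_loss([0, 0, 10, 10], [20, 20, 30, 30]))
```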
Dataset and Evaluation Indicators There are a few big, publicly available traffic sign datasets, the majority of which use the GTSDB, but the GTSDB is not the same as Chinese traffic signs. CTSDB, CCTSDB, and TT100K, among others, are Chinese traffic sign datasets. The CCTSDB was expanded on the basis of CTSDB, and its categories were divided into warning signs, directional signs, and prohibition signs, without detailed classification of traffic signs. The TT100K traffic sign collection was created in collaboration between Tencent and Tsinghua University. It offered thorough categorization and identification of traffic signs, covered various climatic and lighting circumstances, and was more representative of actual driving situations. Therefore, the TT100K traffic sign dataset was used in this paper, and some of the traffic signs and the category information are shown in Figure 8. The TT100K dataset has 100,000 photos with a resolution of 2048 × 2048 pixels, although there are unlabeled traffic sign images, and some categories have only a few images or duplicate images, reducing the detection effect. Therefore, this paper removed the unlabeled and duplicate traffic sign images from the dataset and selected 45 categories with a high number of traffic signs; the 45 traffic sign categories were: pn, pne, i5, pl1, pl40, po, pl50, pl80, io, pl60, p26, i4, pl100, pl30, il60, pl5, i2, w57, p5, p10, ip, pl120, il80, p23, pr40, ph4.5, w59, p12, p3, w55, pm20, pl20, pg, pl70, pm55, il100, p27, w13, p19, ph4, ph5, wo, p6, pm30, and w32. The number of each traffic sign category is shown in Figure 9. Figure 9 shows that even if 45 categories with a large number of traffic signs were chosen, there was still a significant imbalance in the amount of data between categories, resulting in poor model prediction accuracy. As a result, as illustrated in Figure 10, this work balanced and expanded the dataset by employing tactics such as color dithering, Gaussian noise, and image rotation to ensure that the amount of data in each category was as equal as feasible. In addition, the K-means clustering algorithm was used to re-cluster the labeled box sizes of the TT100K dataset, yielding four prediction scales and twelve anchors: (4, 5), (5, 6), (7, 7), (7, 13), (8, 8), (9, 10), (11, 12), (13, 14), (16, 17), (20, 22), (27, 29), and (41, 44). 
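The anchor re-clustering just mentioned can be sketched as a K-means variant on the labeled box widths and heights. The 1 − IoU distance used below is the common choice for YOLO anchors and is an assumption here, as the text only states that K-means was applied; the box_wh array is a synthetic placeholder for the real TT100K annotations.

```python
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between width/height pairs, assuming all boxes share the same center."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=12, iters=100, seed=0):
    """Cluster labeled box sizes into k anchors using a 1 - IoU distance."""
    rng = np.random.default_rng(seed)
    centroids = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)  # minimizes 1 - IoU
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids[np.argsort(centroids.prod(axis=1))]       # sort by area

# Placeholder data: box_wh would hold the (width, height) of every labeled sign in pixels.
box_wh = np.abs(np.random.default_rng(1).normal(15, 8, size=(1000, 2))) + 2
print(np.round(kmeans_anchors(box_wh, k=12)).astype(int))
```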
The Mosaic approach reads four images at a time, scales and alters the color gamut of each image, arranges them in four directions, and then stitches the images, together with the targets' ground-truth frames, into one image. The enhancement method stitches four images, which is equivalent to calculating the parameters of four images with one input. This can reduce the number of images required per batch, reduce the training difficulty and training cost, improve the training speed, and largely enrich the number of samples in the dataset, which is conducive to the model's learning of features. 
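A stripped-down sketch of the stitching step is given below. The real Mosaic augmentation additionally rescales each image, jitters its color gamut, and picks a random split point, none of which is shown here, and the equal image sizes are an assumption made to keep the example short.

```python
import numpy as np

def mosaic(images, boxes_per_image):
    """Stitch four equally sized images into the quadrants of one large canvas
    and shift their ground-truth boxes ([x1, y1, x2, y2]) accordingly."""
    h, w, c = images[0].shape
    canvas = np.zeros((2 * h, 2 * w, c), dtype=images[0].dtype)
    offsets = [(0, 0), (w, 0), (0, h), (w, h)]          # top-left corner of each quadrant
    all_boxes = []
    for img, boxes, (ox, oy) in zip(images, boxes_per_image, offsets):
        canvas[oy:oy + h, ox:ox + w] = img
        for x1, y1, x2, y2 in boxes:
            all_boxes.append([x1 + ox, y1 + oy, x2 + ox, y2 + oy])
    return canvas, np.array(all_boxes)

# Toy example with four random "images" and one box each.
imgs = [np.random.randint(0, 255, (608, 608, 3), dtype=np.uint8) for _ in range(4)]
boxes = [[[100, 120, 140, 160]] for _ in range(4)]
big, big_boxes = mosaic(imgs, boxes)
print(big.shape, big_boxes.shape)                        # (1216, 1216, 3) (4, 4)
```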
In this paper, the evaluation metrics of the COCO dataset, including mAP(IoU = 0.50), AP_S, AP_M, AP_L, and several other metrics, were used to evaluate the performance of the model. In particular, most of the traffic signs in the TT100K traffic sign dataset belonged to small targets, so special attention needed to be paid to the detection accuracy of small targets. The specific meanings of the evaluation metrics are as follows: AP is the area below the P-R curve, where P and R are precision and recall, respectively; mAP(IoU = 0.50) is the average of the AP over all categories in the dataset when the IoU threshold is set to 0.50, which is the evaluation index of the PASCAL VOC dataset and corresponds to AP(IoU = 0.50) in the COCO evaluation index; AP_S is the average mAP for small objects (area < 32²) over IoU thresholds ranging from 0.5 to 1.00 in steps of 0.05, i.e., 10 IoU thresholds in total; AP_M is the corresponding average for medium objects (32² < area < 96²); and AP_L is the average mAP for large objects (area > 96²) over the same 10 IoU thresholds. 
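For reference, the AP of a single class can be computed as the area under the precision-recall curve once each detection has been marked as a true or false positive at the chosen IoU threshold. The all-point interpolation used below is one common convention and is not necessarily the exact variant used by the COCO toolkit.

```python
import numpy as np

def average_precision(scores, is_tp, n_ground_truth):
    """Area under the precision-recall curve for one class.

    scores: confidence of each detection
    is_tp:  1 if the detection matched an unmatched ground-truth box at the
            chosen IoU threshold (e.g., 0.50), else 0
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(n_ground_truth, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)
    # Integrate the interpolated precision over recall (all-point interpolation).
    envelope = np.maximum.accumulate(precision[::-1])[::-1]
    ap, prev_r = 0.0, 0.0
    for r, p in zip(recall, envelope):
        ap += (r - prev_r) * p
        prev_r = r
    return ap

# mAP at IoU = 0.50 would then be the mean of this value over all 45 classes.
print(average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1], n_ground_truth=4))
```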
The SPP structure combined local and global characteristics, Table 1 and Figure 12 show that the average mean accuracy of the original YOLOv3 without employing any strategies was 68.9%, whereas the mAP of the upgraded YOLOv3 with all methods was 77.3%, an improvement of 8.4% in detection. The DIoU loss function and re-clustering anchor technique enhanced detection accuracy by 1.3%; however, the improvement was due to faster loss function convergence during training, which made the target box regression more stable and improved the recall rate. More pronounced improvements in mAP were seen in YOLOv3, which included an SPP structure and achieved a 73.2%. The SPP structure combined local and global characteristics, enhancing the feature map's ability to express itself and significantly increasing detection accuracy. Using the method of adding a fourth prediction feature layer with 152 × 152 scales, the mAP was also considerably improved. The accuracy of tiny-target detection was enhanced by 10.5% when compared to YOLOv3, which made full use of the shallow features in the network for small-target prediction, resulting in a considerably improved detection effect, but at the cost of increased network complexity and processing. The best improvement was M-YOLOv3, which combined the three improvement procedures and achieved a mAP of 77.3%, which is 8.4% higher than the original YOLOv3 s average mean accuracy. Figure 13 depicts the test results of M-YOLOv3 on TT100K. Table 2. Comparison of the Improved YOLOv3 Algorithm with Other Algorithms M-YOLOv3 was compared with several other classical target detection algorithms to further validate the detection recognition of the improved network, and the results are shown in Table 2. Table 2 demonstrates that M-YOLOv3 had the highest mAP of 77.3%, and SSD had the best real-time performance, with an FPS of 42. Compared with the original YOLOv3 algorithm, the average precision mean was greatly improved, although the real-time performance was reduced. Compared with the one-stage algorithm SSD, mAP improved by 12%, but there was still a gap in real-time performance. Compared with the two-stage target detection algorithm Faster-RCNN, the FPS was improved to 22, and the mAP was also improved by 1.7%, which improved the detection speed, as well as the detection accuracy. The trials showed that M-YOLOv3 performed better in terms of detection accuracy and speed. Improved Recognition Effect of YOLOv3 on Traffic Signs in a Special Environment Due to various factors, such as strong light irradiation, nighttime, and special environments of traffic sign occlusion, that will affect traffic sign detection and recognition in real-world driving scenarios, it was also necessary to consider the model's recognition effect on traffic signs in special environments. In particular circumstances, the upgraded YOLOv3 model was employed to recognize traffic signs, as demonstrated in Figure 13. In Figure 14, the detection effect of YOLOv3 is compared with that of M-YOLOv3 in a special environment. As shown in Figure 14(b1,c1), the YOLOv3 algorithm failed to detect the obscured traffic sign in the case of an obscured traffic sign, while the improved YOLOv3 algorithm accurately identified the obscured traffic sign; as shown in Figure 14(b2,c2), the YOLOv3 algorithm had problems of false detection and missed detection for traffic sign recognition under the environment of strong light irradiation, while the improved YOLOv3 algorithm recognized all the traffic signs accurately. 
The improved YOLOv3 algorithm added the fourth feature prediction scale for small targets, improving their detection, whereas the YOLOv3 algorithm had issues with missed detection and low confidence for small targets, as shown in Figure 14(b3,c3); in dimly illuminated environments, such as at night, the upgraded YOLOv3 algorithm recognized traffic signs, as illustrated in Figure 14(b4,c4), whereas the YOLOv3 method did not detect the targets. As a result, under particular situations, the updated YOLOv3 algorithm still yielded better detection results. Conclusions A traffic sign detection and recognition network based on the modified YOLOv3 was suggested in this research, with the goal of addressing the difficulties of small targets being difficult to detect and of low detection accuracy in traffic sign detection and identification tasks. The new spatial pyramidal pooling structure enabled the fusion of local and global features in this study, and a fourth feature prediction scale was added for small targets to improve their detection effect. To make the target frame regression more stable, the DIoU loss was utilized, which converges faster and is more consistent with the target frame regression mechanism. The detection network's accuracy was considerably improved while degrading the network's real-time performance as little as possible; the mAP increased by 8.4 points. The upgraded YOLOv3 algorithm increased the network's complexity and lowered the detection speed. However, real-time detection is still a long way off; therefore, boosting detection speed to accomplish real-time detection will be the next research direction.
v3-fos-license
2024-03-01T06:18:24.729Z
2024-02-28T00:00:00.000
268058275
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://animalmicrobiome.biomedcentral.com/counter/pdf/10.1186/s42523-024-00299-3", "pdf_hash": "dba7883a60ab7a907d9c21e35c636e9aadbd15d4", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:133", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "sha1": "1527790d0637dcafbcaf3b514ef90469e38b9ed7", "year": 2024 }
pes2o/s2orc
Virome characterization of diarrheic red-crowned crane (G. japonensis) Background The red-crowned crane is one of the vulnerable bird species. Although the captive population has markedly increased over the last decade, infectious diseases can lead to the death of young red-crowned cranes, while few virological studies have been conducted. Methods Using a viral metagenomics approach, we analyzed the virome of tissues of a dead captive red-crowned crane with diarrhea symptoms in Dongying Biosphere Reserve, Shandong Province, China, and of feces of individual birds breeding at the corresponding captive breeding center, which were pooled separately. Results There were many more DNA and RNA viruses in the feces than in the tissues. An RNA virus belonging to the family Picornaviridae and DNA viruses belonging to the family Parvoviridae, associated with enteric diseases, were detected in the tissues and feces. Genomes of the picornavirus, genomovirus, and parvovirus identified in the study were fully characterized, which further suggested that infectious viruses of these families were possibly present in the diseased red-crowned crane. Conclusion An RNA virus belonging to the family Picornaviridae and DNA viruses belonging to the families Genomoviridae and Parvoviridae were possibly the causative agents of diarrhea in the red-crowned crane. This study has expanded our understanding of the virome of the red-crowned crane and provides a baseline for elucidating the etiology of diarrhea in these birds. Introduction The red-crowned crane (Grus japonensis) is a species of large, wading, omnivorous bird in the family Gruidae (cranes). The red-crowned crane is listed in Class I of China's National List of Key Protected Wildlife and is listed as a vulnerable bird species by the International Union for Conservation of Nature (IUCN 2021). The current global population in the wild is estimated to be 2800-3430 individuals [1]. Several efforts, including the creation of biosphere reserves and captive breeding programs, have been made to maintain the number of communities by reintroducing captive-bred cranes into the wild. Although the captive population has markedly increased over the last decade [2], infectious diseases can lead to the death of young red-crowned cranes, while few virological studies have been conducted [3]. Recently, our understanding of the virosphere has been revolutionized by metagenomics [4]. Metagenomic next-generation sequencing (mNGS), independent of culture, has massively fostered the rate of virus discovery by identifying divergent viruses that could not be detected using traditional approaches. In addition, these technical advances also enable rapid assembly of viral genomes. Importantly, the diversity and abundance of viruses in individuals or environmental samples can be assessed using NGS-based approaches followed by intensive bioinformatics analyses. Using a viral metagenomics approach, the virome of feces from wild and captive red-crowned cranes has been investigated, identifying large numbers of vertebrate, plant, insect, and aquatic animal viruses [5]. The current study was designed to detect and characterize the virome present in captive breeding birds in Dongying Biosphere Reserve in Shandong Province, which is located on the eastern migration route for birds in China. Specifically, we wanted to explore the possible viral etiology for the diarrheal death of the captive red-crowned cranes breeding in the biosphere reserve. 
Sample collection and preparation

In the spring of 2022, several 3-month-old red-crowned cranes raised in the Dongying Biosphere Reserve, Shandong Province, China, showed symptoms of acute watery diarrhea, and one died within a day without any bleeding or tissue lesions on autopsy examination. A tissue sample comprising stomach, the upper part of the intestines and liver of the dead red-crowned crane was collected immediately and stored on dry ice prior to shipping to the laboratory (Fig. 1). The feces sample was a pool of thirty fresh feces from individual clinically healthy birds of different ages breeding at the captive breeding center; feces of wild birds were not excluded. About 2 g of each tissue was collected, cut into small pieces, pooled, mixed gently by inversion and then incubated in 1 ml digestion buffer (1 ml DPBS, 1 mg/ml collagenase I, 12 U DNase I, pH 7.5) at 37 °C for 1 h. The digested samples were then centrifuged at 5000 × g for 10 min to remove debris, cells and bacteria. After that, the supernatants were passed through 0.22 μm filters (Sartorius). The filtrates were treated with 200 U Benzonase (Millipore), Turbo DNase I (ThermoFisher Scientific) and 0.1 mg/ml RNase A (Sangon Biotech), followed by heat inactivation of the DNases at 65 °C for 10 min. The fecal sample (~ 200 mg) was suspended in 1.2 mL saline-magnesium buffer and vortexed for 10 min, followed by centrifugation at 5000 × g for 20 min. The clarified suspensions were filtered through 0.22 μm filters (Sartorius). The filtrates were treated with 200 U Benzonase (Millipore) and 0.1 mg/ml RNase A (Sangon Biotech), followed by heat inactivation of the DNases at 65 °C for 10 min.

Fig. 1 Graphical scheme and flow chart of the experimental design for virus discovery. Tissue and feces samples were collected from the Dongying Biosphere Reserve, Shandong Province, China in spring 2022. The samples were pooled separately and subjected to host/bacteria removal and virus collection, DNA/RNA extraction, virome sequencing and final bioinformatics analysis.

Virome sequencing

Virome sequencing was performed at Chengdu Life Baseline Technology Co., Ltd. Viral RNA and DNA extraction was performed with the QIAgen MinElute Virus Spin Kit according to the manufacturer's recommendations. The DNA concentration was quantified using Qubit with an Equalbit 1× dsDNA HS Assay Kit (Vazyme Biotech Co., Ltd), and the RNA concentration was determined using Qubit with an Equalbit RNA HS Assay Kit (Vazyme Biotech Co., Ltd). The virome library was constructed using a sequence-independent amplification (SIA) method [6]. Briefly, SIA was performed on the total nucleic acid using 200 U SuperScript III reverse transcriptase with a random primer (5′-GAC CAT CTA GCG ACC TCC ACNNNNNNNN-3′), followed by second-strand synthesis with 2.5 U 3′-5′ exo- Klenow DNA polymerase (New England Biolabs). The double-stranded product was then PCR amplified with a specific primer (5′-GAC CAT CTA GCG ACC TCC AC-3′). The purified amplification product was fragmented, end-repaired, dA-tailed, adaptor-ligated and amplified with the VAHTS® Universal Plus DNA Library Prep Kit for Illumina (Vazyme Biotech Co., Ltd). The library was quality-checked on an Agilent 4200 Bioanalyzer and sequencing was performed on the Illumina NovaSeq 6000 platform with 2 × 150 bp paired-end reads.
Data analyses

Fastp was used to remove low-quality reads and reads containing a high proportion of N, and to trim adapters, yielding clean reads after sequencing [7]. The rRNA sequences and host contamination in the fecal sample were removed with Bowtie2 [8] using a collection of genomic sequences including Anser fabalis, Fulica atra, Ciconia boyciana, Ardea cinerea, Aix galericulata, Egretta garzetta, Grus japonensis, Anser cygnoides and Anas platyrhynchos. For the tissue sample, the genome of Grus japonensis was used. MEGAHIT [9] was used for de novo assembly to obtain contigs (--min-contig-len 500). Open reading frames (ORFs) were then predicted with Prodigal [10]. Viral contigs were identified by database comparison (NR database and RdRp database) and virus identification software (VirSorter2 [11], DeepVirFinder [12], and VIBRANT [13]) simultaneously, and the union set was used for subsequent analysis. BBSketch and hmmsearch were used to remove false-positive sequences of non-viral origin, such as genome sequences of bacteria, fungi, and protozoa. High-similarity (HS) annotated viral contigs, with nucleic acid and protein similarity above 85%, were considered high-confidence virus sequences and were used for further virus sequence assembly.

Phylogenetic analyses for identified viruses

To generate phylogenetic trees, the assembled genome sequence of each virus was subjected to a BLASTN search against NCBI's GenBank (http://blast.ncbi.nlm.nih.gov/Blast.cgi). We retrieved the top homologous viral genome sequences and the representative strains of different species from the corresponding family for each virus. Nucleotide sequences were aligned with the ClustalW program [14]. Phylogenetic trees were constructed with the MEGA6 program [15] using the neighbor-joining method with 1000 bootstrap replicates, as previously reported [16].
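To make the read-processing workflow concrete, the sketch below strings the main command-line steps together from Python. It is illustrative only: the file names, host index name and output paths are placeholders rather than the study's actual inputs, and the exact options should be checked against the documentation of the tool versions used.

```python
# Minimal sketch of the read-cleaning and assembly steps described above.
# All paths, index names and file names are placeholders, not values from the study.
import subprocess

def run(cmd):
    """Run one pipeline step and fail loudly if it returns a non-zero exit code."""
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Quality trimming / adapter removal with fastp
run(["fastp", "-i", "raw_R1.fq.gz", "-I", "raw_R2.fq.gz",
     "-o", "clean_R1.fq.gz", "-O", "clean_R2.fq.gz"])

# 2. Host/rRNA removal: keep read pairs that do NOT align to the host index;
#    bowtie2 derives the two mate file names from the --un-conc-gz path
run(["bowtie2", "-x", "host_index",
     "-1", "clean_R1.fq.gz", "-2", "clean_R2.fq.gz",
     "--un-conc-gz", "nonhost.fq.gz", "-S", "/dev/null"])

# 3. De novo assembly with MEGAHIT, discarding contigs shorter than 500 bp
run(["megahit", "-1", "nonhost.fq.1.gz", "-2", "nonhost.fq.2.gz",
     "--min-contig-len", "500", "-o", "megahit_out"])

# 4. ORF prediction on the assembled contigs with Prodigal in metagenome mode
#    (final.contigs.fa is MEGAHIT's default output name)
run(["prodigal", "-i", "megahit_out/final.contigs.fa",
     "-a", "orfs.faa", "-p", "meta"])
```

Keeping each step as an explicit external command mirrors the description above and makes it straightforward to swap in a different assembler, host index or virus-identification tool.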
Overview of metagenomic data and virome composition

In our study, 66,814,680 clean reads were obtained from the feces sample and 67,924,496 clean reads from the tissue sample, of which 135,636 (0.20%) and 79,818 (0.12%), respectively, were viral reads. A total of 10,899 and 685 viral contigs were identified in the feces and tissue samples, respectively (Table 1), indicating that there are many more viruses in the feces than in the tissues. For the feces sample, 22 viral contigs were defined as HS-annotated RNA viruses and 28 as HS-annotated DNA viruses. For the tissue sample, only 5 viral contigs were defined as HS-annotated RNA viruses and 1 as an HS-annotated DNA virus. In addition, 1118 and 104 novel viruses were predicted in the feces and tissue samples, respectively.

Genome architectures and evolution of picornavirus

Picornaviruses were found in both samples. The family Picornaviridae currently consists of 158 species grouped into 68 genera (as of July 2022). In this study, 3977 sequence reads from the tissue library related to Picornaviridae sp. could be assembled into a nearly complete picornavirus genome (9707 nt) with 89.13% identity and 99% coverage to Megrivirus HN56 (GenBank no. KY369300). The picornavirus strain was named SDDY2022 and included a 491-bp 5′ UTR, an 8919-bp polyprotein ORF encoding a putative polyprotein of 2972 aa, and a 297-bp partial 3′ UTR (Fig. 3A). Phylogenetic analysis based on the complete genomes of representative strains from the family Picornaviridae showed that the SDDY2022 strain clustered with two goose megriviruses (W18 and HN56) causing enteritis, duck megrivirus strain LY, and a cluster of Megrivirus A strains identified from Cygnus olor in the United Kingdom, all of which belong to the Megrivirus A branch that infects birds (Fig. 3B). Based on sequence alignment with the closest strain, HN56, P1, P2, and P3 polypeptides showing 97.1%, 95.7%, and 98.8% homology were identified in the polyprotein. The P1 polypeptide (892 aa) was assumed to be cleaved at VP0/VP3 (378 E/A) and VP3/VP1 (629 Q/G) to produce three capsid proteins. The P2 polypeptide (1196 aa) and P3 polypeptide (884 aa) were assumed to be cleaved at 2A1/2A2 (1037 E/A), 2A2/2A3 (1331 E/A), 2A3/2B (1548 Q/A), 2B/2C (1743 E/A), 2C/3A (2088 Q/A), 3A/3B (2267 E/A), 3B/3C (2296 E/G), and 3C/3D (2498 E/A) to produce nine nonstructural proteins. The SDDY2022 and HN56 strains possessed the same cleavage sites as above, except that the cleavage site between the P1 and P2 polypeptides was 892 Q/S in SDDY2022, while it was 892 Q/N in HN56. Together, these data suggest that the tissue sample contains a Megrivirus A with a typical picornavirus genome structure. In addition, three partial picornavirus sequences were identified in the tissues and feces, respectively.

Genome architectures and evolution of genomovirus

A large number of genomovirus sequence reads were detected in the feces sample. The family Genomoviridae currently consists of 237 species grouped into 10 genera (as of July 2022). Three complete genomovirus genome sequences, named GkrogV/Gj, GgorV/Gj, and GkoloV/Gj, were assembled from the feces. GkrogV/Gj showed 98.40% identity to Genomoviridae sp. strain Gen-120 (GenBank no. OK491637) with 100% coverage, GgorV/Gj showed 99.91% identity to Genomoviridae sp. isolate 190Gen-2 (GenBank no. OM892309) with 100% coverage, and GkoloV/Gj showed 99.91% identity to Momordica charantia associated gemycircularvirus isolate Br1 (GenBank no. NC_075310) with 93% coverage. Phylogenetic analysis based on the complete genomes of representative strains from the family Genomoviridae showed that they were closely related to the genera Gemykrogvirus, Gemygorvirus, and Gemykolovirus, respectively (Fig. 4A). The complete genomes were 2127, 2208, and 2221 nt in length, and all encoded two major proteins, the capsid protein (Cp) and the replication-associated protein (Rep), including a putative intron (Fig. 4B). Non-canonical stem-loop structures were identified in the genomovirus genomes. In addition, seven partial sequences associated with finch, raccoon dog, murine and giant panda genomoviruses were also identified in the feces, while only a partial sequence belonging to Genomoviridae sp. was identified in the tissues.
Genome architectures and evolution of parvovirus

In addition to genomoviruses, a large number of parvovirus sequence reads were detected in the feces sample. Viruses within the family Parvoviridae are currently grouped into three phylogenetically defined subfamilies and an unassigned genus (Metalloincertoparvovirus): Parvovirinae (11 genera), which so far includes only viruses infecting vertebrates; Densovirinae (11 genera), which comprises viruses infecting invertebrates; and Hamaparvovirinae, a recently established taxon that contains viruses identified in both vertebrate (2 genera) and invertebrate (3 genera) hosts (as of July 2022). A complete parvovirus genome sequence, named ChapV/g1/Gj, was obtained from 22,984 reads. The assembled genome was 5601 nt in length, showed 83.96% identity to Parvoviridae sp. isolate yc-10 (GenBank no. NC_075277) with 78% coverage, and contained four putative ORFs encoding the non-structural proteins NS1, NS2 and NS3 (P15 accessory protein) at the left end and a viral capsid protein VP1 at the right end (Fig. 5A). Phylogenetic analysis based on the complete NS1 sequence (100% coverage) of representative strains of the family Parvoviridae showed that the parvovirus identified in the present study, as well as Parvoviridae sp. isolate yc-10, belongs to the species Chaphamaparvovirus galliform1 of the genus Chaphamaparvovirus in the subfamily Hamaparvovirinae (Fig. 5B). In addition, six partial sequences belonging to the subfamily Densovirinae, associated with wild and zoo birds and with hosts including Blattella germanica, Parus major and Junonia coenia, and one partial sequence belonging to the subfamily Parvovirinae associated with bats were also identified in the feces.

Discussion

Enteric disease is an ongoing problem worldwide. Gastrointestinal symptoms can be caused by several factors, including infecting viruses. Wild birds may harbor a large number of pathogens that can cause diseases in animals or humans [17,18]. Many enteric viruses have been identified in birds with the application of mNGS methods independent of traditional culture [19]. Multiple coinfections promote and facilitate the recombination and evolution of viruses and could ultimately contribute to the severity of the diarrhea. Herein, we employed an mNGS method to characterize the virome of the tissues of a young dead red-crowned crane with diarrhea symptoms and of a pool of fresh feces from individual birds breeding at the corresponding captive breeding center.
Fig. 4 Phylogenetic analysis and genomic features of the three identified genomoviruses. A Phylogenetic relationships of the identified genomoviruses with other viruses of the ten genera of the family Genomoviridae based on the complete genomes. The phylogenetic tree was constructed with the MEGA6 program using the neighbor-joining method with 1000 bootstrap replicates. Viruses identified in this study are denoted with a black filled triangle. B Genomic organization indicating ORFs and stem-loop structures of the identified genomoviruses.

Consistent with previous studies, the feces of the red-crowned crane contain abundant DNA and RNA viruses [5]. Collectively, two major DNA virus families, Genomoviridae and Parvoviridae, were identified in the feces. Apart from Megrivirus A of the family Picornaviridae, which infects animals, the RNA viruses dominating the feces were mostly viruses believed to infect insects, plants, and crustaceans; such viruses have also been found in the aquatic environment and in the feces of humans and various animals, and they were probably derived from the diet of the red-crowned cranes [5]. A Parvoviridae virus and Megrivirus A were the dominant DNA and RNA viruses found in the tissues of the diseased red-crowned crane, and both were also detected in the feces. Therefore, virus species from the families Picornaviridae, Genomoviridae and Parvoviridae may be the causative agents triggering the diarrhea symptoms of the red-crowned crane. Moreover, a nearly complete genome of the picornavirus, three complete genome sequences of genomoviruses, and a complete genome sequence of the parvovirus were assembled, further suggesting that infectious viruses of the families Picornaviridae, Genomoviridae and Parvoviridae were very possibly present in the diseased red-crowned crane.
The family Picornaviridae comprises diverse non-enveloped, positive-sense ssRNA viruses with genomes of 7-9 kb in length, which may cause various diseases, including enteric diseases, in different vertebrate hosts [20,21]. Of particular note is the discovery of picornaviruses in both the tissue and feces samples in this study, with a nearly complete picornavirus genome identified in the tissues. The identified picornavirus clustered with two goose megriviruses (W18 and HN56) causing enteritis [22], both of which belong to the Megrivirus A branch that infects birds. W18 and HN56 were identified from goose flocks with approximately 20% mortality in 15- to 30-day-old geese and were suspected to be potential recombinant viruses, with a distinct P1 region possibly originating from an unknown picornavirus [22]. The P1, P2, and P3 polypeptides of the SDDY2022 strain in this study were highly homologous to those of HN56, and the two viruses shared nearly all cleavage sites, differing at only one site. These results suggest that the SDDY2022 strain may derive from a common ancestor with HN56. Our results, together with the previous report on HN56, indicate that Megrivirus A could be an important enteric virus and may be the major causative agent triggering the diarrhea symptoms in the red-crowned crane in our study. In addition, these viruses are also closely related to duck megrivirus strain LY, which was highly prevalent in duck populations [23], and to a cluster of Megrivirus A strains identified from Cygnus olor in the United Kingdom. These findings suggest that Megrivirus A infections are geographically widespread around the world and that wild waterbirds may play an important role in the spread and recombination of picornaviruses.

The family Genomoviridae includes viruses with small, circular ssDNA genomes (~ 2-2.4 kb) that encode two proteins, Cp and Rep, separated by an intergenic region [24]. In the present study, the feces sample was rich in Genomoviridae sp., whereas in the tissue sample these viruses were represented by only a limited number of reads, although they were the dominant DNA viruses detected there. With the advent of metagenomics approaches, a large number of uncultivated Genomoviridae viruses have been found in association with a great variety of environmental, plant, and animal samples [25]. However, no direct implication in a disease has been demonstrated so far. Previously, complete genome sequences of gemycircularviruses have been assembled from both wild and breeding red-crowned cranes [5]. Herein, a total of three complete genome sequences closely related to the genera Gemykrogvirus, Gemygorvirus, and Gemykolovirus were also assembled from the reads of the feces sample. However, no complete genome sequences of Genomoviridae sp. could be identified in the tissues. It is therefore possible that the Genomoviridae viruses were derived from the diet and might cause only a local infection.
Parvoviruses are icosahedral, non-enveloped viruses with ssDNA genomes of about 5 kb, and they are prevalent in deep-sequencing results from livestock with diarrhea [26,27]. The parvovirus with a complete genome sequence identified in the present study belongs to the species Galliform chaphamaparvovirus 1 of the genus Chaphamaparvovirus in the subfamily Hamaparvovirinae [28]. Galliform chaphamaparvovirus 1 includes a single virus, turkey parvovirus 2, which has been detected at high prevalence in the feces of domestic turkeys [29]. The present study expands the host spectrum of Galliform chaphamaparvovirus 1 and also supports the view that the host range of this species is limited to birds. Increasing evidence has shown that chaphamaparvoviruses localize in the gastrointestinal system and could play a potential role as enteric pathogens associated with diarrhea [26,27]. In addition, six partial sequences belonging to the subfamily Densovirinae associated with wild and zoo birds, and one partial sequence belonging to the subfamily Parvovirinae associated with bats, were also identified in the feces. Parvovirinae and Densovirinae are the classic subfamilies of Parvoviridae defined in 1993, infecting vertebrate and invertebrate animals, respectively. These results show that the wild birds in the Dongying Biosphere Reserve carry a large number of parvoviruses infecting both birds and mammals, which may have contributed to the death of the young red-crowned crane. Given the migratory nature of wild birds and the wide host range of parvoviruses, the viruses excreted in the feces undoubtedly also play a vital role in virus transmission in the ecological environment.

Of particular note is that the dead red-crowned crane with diarrhea symptoms was collected in the Dongying Biosphere Reserve, Shandong Province, which is located in the Chinese eastern migration area for bird migration, encompassing the East Asia-Australasia migration route and the middle of the Western Pacific migration route. Birds migrate from south to north in the spring, and the red-crowned crane is one of the representative migratory bird species in the area [30]. Therefore, the viruses excreted in the feces into the environment probably have important ecological impacts. The virome of the tissues and feces of the dead captive red-crowned crane with diarrhea symptoms therefore provides clues for comparison with those of other birds or with subsequent diarrheal disease outbreaks nearby or in other migratory areas.

Conclusion

Taken together, although the etiology of the diarrhea of the dead red-crowned crane remains to be clarified, we have identified virus species from the families Picornaviridae and Parvoviridae associated with enteric diseases in the tissues of the red-crowned crane in the Dongying Biosphere Reserve, Shandong Province, China, and in the feces of individual birds breeding at the corresponding captive breeding center. In particular, the complete genome sequences of the picornavirus, genomovirus, and parvovirus further suggest that infectious viruses of these families were possibly present in the diseased red-crowned crane. Our results enrich our understanding of the virome of birds and provide a baseline for elucidating the causative agents of diarrhea in these birds and for investigating subsequent virological disease outbreaks.

Fig. 2 Overview of the virus composition. Relative abundance of RNA viruses (A) and DNA viruses (B), and novel viral reads (C, D) in the feces and tissues, respectively, based on viral contig numbers.
Fig. 3 Genomic organization and phylogenetic analysis of the Megrivirus A of Picornaviridae. A Genomic organization of the Megrivirus A strain indicating open reading frames (ORFs). The predicted polypeptides are shown below the gene box and the predicted cleavage sites above it. B Phylogenetic relationships of Megrivirus A with other viruses of the family Picornaviridae based on the complete genomes. The phylogenetic tree was constructed with the MEGA6 program using the neighbor-joining method with 1000 bootstrap replicates. The virus identified in this study is denoted with a black filled triangle.

Fig. 5 Genomic organization and phylogenetic analysis of the parvovirus. A Genomic organization of the identified parvovirus indicating ORFs. B Phylogenetic relationships of the parvovirus with representative strains of the family Parvoviridae based on the complete sequence of the NS1 gene. The phylogenetic tree was constructed with the MEGA6 program using the neighbor-joining method with 1000 bootstrap replicates. The virus identified in this study is denoted with a black filled triangle.

Table 1 Summary of the metagenomic data. Abbreviations: ORFs, open reading frames; HS, high-similarity; PHG, phage.
v3-fos-license
2024-01-26T16:35:10.711Z
2024-01-22T00:00:00.000
267219129
{ "extfieldsofstudy": [], "oa_license": "CCBYSA", "oa_status": "HYBRID", "oa_url": "https://jist.publikasiindonesia.id/index.php/jist/article/download/868/1544", "pdf_hash": "b39ba8108d7e5f9935000167a0f0524b3a502a57", "pdf_src": "Anansi", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:135", "s2fieldsofstudy": [ "Business", "Economics" ], "sha1": "841c7ab0faed379da148e3e22a27095590dda20d", "year": 2024 }
pes2o/s2orc
ANALYSIS OF VALUE AT RISK MEASUREMENT USING THE VARIANCE-COVARIANCE METHOD IN THE SECURITIES PORTFOLIO

ABSTRACT

Introduction

Bonds are fixed income securities (SPT) issued by the government and companies as debt instruments. Investors receive returns in the form of coupons at regular intervals and can obtain capital gains when selling these bonds (Herdiyan, Septiawati, Faturahman, & Djuanda, 2023). Every financial instrument that provides a rate of return also carries risk. Bonds carry various risks, including interest rate, call, default, liquidity, and volatility risk (Tu & Chen, 2018). These risks arise from uncertainty factors surrounding the bonds' processes and structure. Therefore, investors should always recognize that trading a bond involves risk.

Choosing bonds with the smallest potential loss is one of an investor's priorities. Investors can calculate the Value at Risk (VaR) in advance to determine the maximum loss they may incur when investing (Adibrata, Hartati, & Asih, 2021). The VaR estimation process is integral to the risk management framework applied to banks and non-bank companies (Deni Sunaryo, 2021). The estimation process requires precise risk calculations and modelling techniques to produce the best estimate of the magnitude of VaR for a given period and confidence level (Anam, Di Asih, & Kartikasari, 2020). For banks, the VaR calculation results can be used to determine the minimum capital the bank needs to meet Basel regulations.

One approach to VaR measurement is the variance-covariance method, also known as the delta-normal method. The variance-covariance method was chosen because it produces a lower estimate of the potential future volatility of an asset or portfolio compared to the Monte Carlo simulation and historical simulation methods (Aritonang & Nasution, 2023). The lower the estimated volatility, the lower the level of risk. The lower volatility estimates arise because the variance-covariance method assumes that returns are normally distributed and that portfolio returns are linear in the returns of the individual assets (Rachmatin, 2015).

Bank Jatim is one of the regional development banks (BPDs) that has long conducted treasury transactions to increase the company's fee-based income (Thariq, 2020). One of these activities is the purchase and sale of securities, which are separated into 3 (three) portfolio groups, namely Amortised Cost (AC), Fair Value Through Other Comprehensive Income (FVTOCI), and Fair Value Through Profit and Loss (FVTPL). In AC portfolios, securities cannot be traded and can only be liquidated at maturity. In FVTOCI portfolios, securities can be traded if there is a need for liquidity, while in FVTPL portfolios, securities must be traded within specific time intervals (Nainggolan, Juliana, & Alantina, 2020). Because the bonds purchased by Bank Jatim in the FVTOCI portfolio can serve as relatively liquid assets when there is a need for liquidity, Bank Jatim must consider the potential risk and return of each bond so that an optimal portfolio can be formed, both in terms of return and risk (Maf'ula, Handayani, & Zahroh, 2018). One measure that can be used to assess the level of investment risk is the calculation of Value at Risk (VaR).

Based on the formulation of the problem above, the objectives of this study are as follows: 1.
Analyze the difference in maximum potential loss for each bond held by PT East Java Regional Development Bank (Bank Jatim) Tbk using the Value at Risk (VaR) calculation. 2. Show whether there is a significant difference between the VaR of a diversified portfolio and that of a single asset.

Types of Research The type of research used is quantitative research with content analysis. The research is carried out systematically to examine risk estimates for bond investments by collecting data that is then analyzed statistically.

Operational Definition and Variable Measurement In this study, the operational definition of the variables is as follows: 1. Value at Risk (VaR). Value at Risk (VaR) quantifies the maximum potential loss that may occur in an asset or portfolio position with a certain probability over a certain period. The variable is measured on a ratio scale, where the final result is a Value at Risk (VaR) value (Rahmadhani, Zulbahridar, & Hariadi, 2016). The smaller the VaR percentage, the smaller the estimated loss. The VaR value is used to determine the level of risk and the estimated loss of each bond or portfolio.

Data Types and Sources This study used primary and secondary data. The secondary data are the yield data of the sampled bonds, taken from the IBPA website, www.phei.co.id, for the period from February 20 to May 19, 2023 (55 working days).

Population and Research Sample The population of this study consists of the bonds in Bank Jatim's FVTOCI portfolio. The bonds sampled were selected based on specific criteria (purposive sampling): government bonds with a remaining duration of more than 8 (eight) years and fixed-coupon characteristics. The bonds used as research objects are FR68, FR80, FR96 and FR98.

Data Collection Methods To ensure that the data obtained are relevant and can serve as a foundation for the analysis, the data collection technique is a documentation method involving collecting, studying, and analysing primary and secondary data. The documentation method used in this study is to retrieve the market price data of the predetermined sample bonds.

Data Analysis Techniques The data analysis technique used in this study is parametric statistics, which rests on distributional assumptions, for example the assumption that the data are normally distributed. This assumption is adopted because the study uses the variance-covariance method, one of whose requirements is that the data are normally distributed. In the analysis with the variance-covariance method, the returns of bond prices are calculated and the normality of the market return data is tested with the Kolmogorov-Smirnov test. The normality of the portfolio return data is also tested with the Kolmogorov-Smirnov test, and the correlation values of the bond portfolio must satisfy -1 ≤ correlation < +1. Next, the single-asset VaR and portfolio VaR are measured, and the VaR results are verified using the log-likelihood ratio test.
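As a rough illustration of the normality check described above, the sketch below applies a one-sample Kolmogorov-Smirnov test to a synthetic series of 55 daily returns computed from a price series; the price data are simulated placeholders rather than the study's IBPA data, and the critical value is the one quoted later in the text.

```python
# Illustrative Kolmogorov-Smirnov normality check on daily bond returns.
# The price series is synthetic; in practice, replace it with the sampled bond data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.005, size=56))  # 56 prices -> 55 daily returns
returns = np.diff(prices) / prices[:-1]

# Compare the empirical distribution with a normal distribution fitted to the sample
d_stat, p_value = stats.kstest(returns, "norm", args=(returns.mean(), returns.std(ddof=1)))

d_critical = 0.180  # tabulated critical value for alpha = 0.05 and n = 55, as used in the study
print(f"D = {d_stat:.3f}, p = {p_value:.3f}")
print("H0 (normality) accepted" if d_stat < d_critical else "H0 rejected")
```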
Hypothesis Testing Criteria The criteria for testing both hypotheses are based on the VaR values generated for each bond and portfolio. If the FR80 bond has the smallest VaR value compared to the other bonds, it is concluded that the FR80 bond has the smallest estimated risk, so the first hypothesis is supported. Furthermore, if the bond portfolio produces a smaller VaR value than each individual bond, it is concluded that the bond portfolio has a smaller estimated risk than a single-asset position, so the second hypothesis is supported.

Results and Discussion

VaR value of a single security

1. Data Normality Test with the Kolmogorov-Smirnov Test Method. Value at Risk (VaR) values can be calculated using the variance-covariance method only when the data are normally distributed. In this study, the data used are the yield data of each security over 55 working days (Wijayanti & Diyanti, 2017). Therefore, the data are first tested for normality using the Kolmogorov-Smirnov test, with the following hypotheses: H_0 = the securities yield data are normally distributed; H_1 = the securities yield data are not normally distributed. The test criterion is that H_0 is accepted if D_hitung (the calculated D statistic) ≤ D_tabel (the critical value), and H_0 is rejected if D_hitung > D_tabel. With a significance level of 0.05 and n (number of observations) = 55, the Kolmogorov-Smirnov critical value table gives D_tabel = 0.180. Data processing in Microsoft Excel gave the following results: the maximum D_hitung of each bond is smaller than D_tabel, so H_0 is accepted. Thus, the yield data of each bond are normally distributed.

Results and Analysis of the VaR measurement. VaR is a quantitative risk measure that estimates the maximum potential loss an investor may face in the future when holding a portfolio over a specific holding period at a given confidence level, assuming normal market conditions. The formula used to calculate the VaR of a single asset is VaR = Z_α × σ × V × √t, where Z_α is the z value of the normal distribution at the 95% level, equal to 1.645; σ is the volatility, i.e. the standard deviation of the asset's return (as given in the table); V is the market value of the asset, i.e. an investment value of Rp 1,000,000,000.00; and t is the holding period of 250 days (the assumed number of working days in one year). Using this formula, the following VaR values were obtained: Table 2 shows that the bond with the code FR0080 has the smallest VaR value of all the bonds, IDR 34,210,227.00 per year or 3.42% of the total investment value. This can be interpreted as follows: at a 95% confidence level, the FR0080 bond has a maximum potential loss of IDR 34,210,227.00, or 3.42% of the total investment value, over the following year. The bond with the code FR0096 has the largest VaR value of all the bonds, Rp 49,816,305.00 per year or 4.98% of the total investment value. At a 95% confidence level, the FR0096 bond therefore has a maximum potential loss of IDR 49,816,305.00, or 4.98% of the total investment value, over the following year.
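A minimal sketch of the single-asset delta-normal calculation is shown below; the z value, exposure and 250-day horizon follow the assumptions stated above, while the daily volatility figure is a placeholder rather than a value from the study's table.

```python
# Delta-normal VaR for a single asset: VaR = z * sigma_daily * V * sqrt(t)
import math

def single_asset_var(z: float, sigma_daily: float, exposure: float, horizon_days: int) -> float:
    """Maximum potential loss at the chosen confidence level over the holding period."""
    return z * sigma_daily * exposure * math.sqrt(horizon_days)

z_95 = 1.645                 # one-sided 95% quantile of the standard normal distribution
exposure = 1_000_000_000     # IDR 1,000,000,000 investment value
horizon = 250                # working days in one year

sigma_fr0080 = 0.0013        # placeholder daily return volatility, not the study's figure
var_fr0080 = single_asset_var(z_95, sigma_fr0080, exposure, horizon)
print(f"VaR(FR0080) ≈ IDR {var_fr0080:,.0f} ({var_fr0080 / exposure:.2%} of exposure)")
```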
VaR value of the securities portfolio

Covariance indicates how much two variables change together. A positive covariance means that the assets move in the same direction, while a negative covariance means that the assets move in opposite directions (Santoso, 2018). The covariance value can be used as one of the indicators for selecting securities when forming a portfolio. Optimising return while minimising risk is achieved by including assets with negative covariance in the portfolio. The covariance values of each bond pair are shown in Table 3 (the covariance value of each bond combination). All covariance values are positive, meaning that each pair of bonds has a positive relationship and their returns move in the same direction. However, because there is no negative covariance value, portfolios were formed from combinations of two assets, giving six portfolios: FR0068 and FR0080, FR0068 and FR0096, FR0068 and FR0098, FR0080 and FR0096, FR0080 and FR0098, and FR0096 and FR0098.

The difference from the single-asset VaR calculation is that the portfolio VaR uses the portfolio volatility (σp). For a portfolio consisting of two assets, σp can be obtained from the formula of Jorion (2007): σp = √(w1²σ1² + w2²σ2² + 2·w1·w2·ρ12·σ1·σ2), where w1 and w2 are the weights of the first and second assets in the portfolio, σ1² and σ2² are the variances of the returns of the first and second assets, and ρ12 is the correlation between the returns of the first and second assets.

The correlation value lies between -1 and 1; if it equals 1, the two assets are perfectly correlated, whereas if it equals zero, the two assets are unrelated. The portfolio VaR can then be found from the following equation: VaRp = √((w1·VaR1)² + (w2·VaR2)² + 2·w1·w2·ρ12·VaR1·VaR2), where w1 and w2 are the weights of the first and second assets in the portfolio, VaR1 and VaR2 are the VaR of the first and second assets, and ρ12 is the correlation between the returns of the first and second assets.

The VaR of a portfolio depends mainly on the weights, i.e. the exposure of the bonds contained in the portfolio, and on the VaR values already obtained in the single-asset measurement. Under the basic assumption of this study that the investment weights are equal, to simplify the calculations, the portfolio VaR can be calculated using the correlation between the bonds in a portfolio, expressed by the correlation coefficient. The results of the portfolio correlation and Value at Risk calculations are shown in the table below. Table 4 shows that the correlation values between the bonds are close to 1, indicating a positive linear relationship. The table also shows that portfolio 5, consisting of the FR0080 and FR0098 bonds, has the smallest VaR value of all the portfolios, Rp 36,208,105.00 or 3.62% of the total investment value. This can be interpreted as follows: at a 95% confidence level, a portfolio combining the FR0080 and FR0098 bonds has a maximum potential loss of IDR 36,208,105.00, or 3.62% of the total investment value, over the following year.

Portfolio 2, consisting of the FR0068 and FR0096 bonds, has the largest VaR value of all the portfolios, Rp 45,050,949.00 or 4.51% of the total investment value. This can be interpreted as follows: at a 95% confidence level, a portfolio combining the FR0068 and FR0096 bonds has a maximum potential loss of IDR 45,050,949.00, or 4.51% of the total investment value, over the following year.
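The two-asset portfolio combination can be sketched as follows; the FR0080 figure is taken from the single-asset results quoted above, while the FR0098 VaR and the correlation are placeholders, since Table 2 and Table 4 are not reproduced in full here.

```python
# Two-asset portfolio VaR under the variance-covariance approach:
# VaR_p = sqrt((w1*VaR1)^2 + (w2*VaR2)^2 + 2*w1*w2*rho*VaR1*VaR2)
import math

def portfolio_var(var1: float, var2: float, w1: float, w2: float, rho: float) -> float:
    """Diversified VaR of a two-asset portfolio given single-asset VaRs and return correlation."""
    return math.sqrt((w1 * var1) ** 2 + (w2 * var2) ** 2 + 2.0 * w1 * w2 * rho * var1 * var2)

var_fr0080 = 34_210_227   # single-asset VaR from Table 2 (IDR)
var_fr0098 = 40_000_000   # placeholder: the FR0098 figure is not quoted in the text
rho = 0.90                # placeholder correlation, close to the near-1 values reported

# Equal 50/50 weights, as assumed in the study
print(f"Portfolio VaR ≈ IDR {portfolio_var(var_fr0080, var_fr0098, 0.5, 0.5, rho):,.0f}")
```

With these placeholder inputs the result comes out close to the IDR 36 million order of magnitude reported for portfolio 5, illustrating how the diversified VaR sits below the weighted sum of the individual VaR values whenever the correlation is below one.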
The VaR value of the portfolio is lower than the VaR of a single asset. The lower value indicates a diversification effect. Diversification arises from the mutually offsetting effect between bonds: if one asset suffers a loss while another earns a profit, the profit of the latter can be used to cover the loss of the former. It can therefore be said that investing by forming a portfolio can reduce investment risk.

Conclusion

The best single bond to invest in is FR0080, because it has the smallest risk value of IDR 34,210,227.00, or 3.42% of the total investment value, at a 95% confidence level for the next one-year period. The best portfolio to invest in is the combination of the FR0080 and FR0098 bonds, because it has the smallest risk value of IDR 36,208,105.00, or 3.62% of the total investment value, at a 95% confidence level for the next one-year period.

A bond portfolio's VaR is lower than that of the individual bonds. This is due to the diversification effect, in which the offsetting effect between bonds reduces the overall risk. The diversification effect is greater when the correlation between the bonds is lower.
v3-fos-license
2019-07-17T21:06:10.894Z
2019-05-28T00:00:00.000
197243657
{ "extfieldsofstudy": [ "Chemistry" ], "oa_license": "CCBY", "oa_status": "HYBRID", "oa_url": "https://www.intechopen.com/citation-pdf-url/67348", "pdf_hash": "c3f7120d07dee0efe43d9c9aabf1e5a01d8f5bf5", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:136", "s2fieldsofstudy": [ "Materials Science" ], "sha1": "e8f37289b57d690745d344dea26fc6ad2b6fe3bb", "year": 2019 }
pes2o/s2orc
Green Corrosion Inhibitory Potentials of Cassava Plant (Manihot esculenta Crantz) Extract Nanoparticles (CPENPs) in Coatings for Oil and Gas Pipeline

Internal and external corrosion affect oil and gas pipelines and are discussed in this chapter. Corrosion inhibitors are one of the methods that can be used to achieve corrosion control and prevention. The main topic of this chapter is the use of cassava plant (Manihot esculenta Crantz) extract nanoparticles (CPENPs) as an additive in coatings to serve as a green corrosion inhibitor for oil and gas pipelines. Trace elements such as O, Si, Ca, K, Fe and S, which are hetero-atoms, have been identified in CPENPs. Elements like Si and Ca would also improve the strength of coatings as well as reduce the corrosion rate of coated metals. It has also been revealed that CPENPs are composed of the compounds SiO2, CaCO3, Ca2(SO4)2H2O and CaC2O4(H2O), which would help in improving the mechanical properties of alloys, composites and coatings. SiO2, if added to coatings, will improve the coating hardness, while CaCO3 in coatings will form a precipitate that serves as a protective film on the surface of the metal, thereby protecting the metal from corrosion. The nature of the bonds and the organic compounds present in the CPENPs are also discussed.

Introduction

Corrosion is the main problem affecting pipelines in the oil and gas industry. Internal corrosion in oil and gas pipelines is primarily caused by the presence of water together with acid gases or sulphate-reducing bacteria. It can be categorized into three types: sweet corrosion, sour corrosion and microbiologically influenced corrosion. Conversely, in external corrosion the surrounding medium reacts with the outer surface of metal pipelines, thereby causing damage. The soil is a complex three-phase system, which makes it a conductor to the metal pipelines. The application of coatings and inhibitors has helped to solve the corrosion problem. Coatings may be applied alone or may be combined with other common protection methods. The cassava extracts considered here are the bark, leaf and stem, which are usually dumped all over the environment as waste. Cassava extract can be processed into nanoparticles and added to coatings as a green corrosion inhibitor.

Corrosion problems in oil and gas pipeline

Corrosion is the main problem affecting oil and gas pipelines. Understanding the electro-chemical nature of corrosion was a major breakthrough, as shown in Figure 1, since it made it possible to mitigate corrosion if an electric current sufficient to offset the inherent corrosion current of a particular environment were caused to flow in the opposite direction. The applied direct current was termed "cathodic" protection because it made the pipe the cathode in a galvanic cell [10,11]. The required current could be supplied by connecting a "sacrificial" anode (i.e., a metal with a higher oxidation potential than iron) in an electrical circuit where the soil acted as the "electrolyte." Alternatively, commercial current could be directed to the pipe via an anode bed. The application of coatings and inhibitors also helps to solve the corrosion problem [10,11].

Internal corrosion in pipeline

Internal corrosion in oil and gas pipelines is primarily caused by the presence of water together with acid gases (carbon dioxide or hydrogen sulphide) or sulphate-reducing bacteria [12].
It can be divided into three broad categories:

• Sweet corrosion

• Sour corrosion

• Microbiologically influenced corrosion (MIC)

External corrosion in pipeline

The surrounding medium reacts chemically, electrochemically and physically with the outer surface of metal pipelines, causing damage. This damage is called external corrosion of pipelines. The soil is a complex three-phase system, which makes it a conductor to the metal pipelines. In addition, the oxygen concentration cell caused by differences in oxygen concentration accelerates pipeline corrosion [13].

Figure 1. Schematic of an anodic site and a cathodic site as they lead to corrosion [9].

Control and prevention of corrosion in the oil and gas industry

Control and prevention of corrosion in the oil and gas industries has been practised for a long time. Although many methods have been suggested to arrest corrosion, they can be classed broadly into four main categories: selection of appropriate materials, use of inhibitors, use of protective coatings and cathodic protection [13].

Corrosion inhibitors

An inhibitor is a substance that, when added in small concentrations to an environment, decreases the corrosion rate. In a sense, an inhibitor can be considered a retarding catalyst. There are numerous inhibitor types and compositions, and they are generally classified into organic and inorganic inhibitors (Figure 2). Most inhibitors have been developed by empirical experimentation, and many inhibitors are proprietary in nature and thus their composition is not disclosed. The chemicals, their concentration, and the frequency of injection depend on the process medium and, normally, on the recommendations of the inhibitor manufacturer, since these chemicals, although generic in nature, are generally proprietary items [10,13,15]. The inhibitors used are normally chromates, phosphates, and silicates, added following the recommendations of the manufacturer. The removal of oxygen from a fluid medium improves the chances of corrosion resistance of materials in contact with the fluid. Controlling and stabilizing the pH value of the medium is another method of combating corrosion. Inhibition is not completely understood for these reasons, but it is possible to classify inhibitors according to their mechanism and composition. The corrosion rates of usefully resistant materials generally range between 1 and 200 mpy [10,13,15].

Inorganic corrosion inhibitors

Substances such as arsenic and antimony ions specifically retard the hydrogen-evolution reaction. As a consequence, these substances are very effective in acid solutions but are ineffective in environments where other reduction processes, such as oxygen reduction, are the controlling cathodic reactions [15]. Scavengers are substances that act by removing corrosive reagents from solution. Oxidizers, such as chromate, nitrate, and ferric salts, also act as inhibitors in many systems. In general, they are primarily used to inhibit the corrosion of metals and alloys that demonstrate active-passive transitions, such as iron and its alloys and stainless steels [15]. Vapor-phase inhibitors are very similar to the organic adsorption-type inhibitors and possess a very high vapor pressure. Generally, the inorganic inhibitors have cathodic or anodic actions.
Anodic inhibitors

Anodic inhibitors (also called passivation inhibitors) act by reducing the anodic reaction, that is, they block the anode reaction and support the natural passivation of the metal surface, also by forming a film adsorbed on the metal. In general, the inhibitors react with the corrosion product initially formed, resulting in a cohesive and insoluble film on the metal surface [14,16,17]. Anodic inhibitors react with the metallic ions Me^n+ produced at the anode, generally forming insoluble hydroxides that are deposited on the metal surface as an insoluble film impermeable to metallic ions; the OH− ions result from hydrolysis of the inhibitors [16]. Some examples of inorganic anodic inhibitors are nitrates, molybdates, sodium chromates, phosphates, hydroxides and silicates [14,15]. A potentiostatic polarization diagram showing the electrochemical behavior of a metal in a solution with an anodic inhibitor (a) versus without an inhibitor (b) is illustrated in Figure 3.

Cathodic inhibitors

As the corrosion process begins, cathodic corrosion inhibitors prevent the occurrence of the cathodic reaction of the metal. These inhibitors contain metal ions able to produce a cathodic reaction due to alkalinity, thus producing insoluble compounds that precipitate selectively on cathodic sites. They deposit a compact and adherent film over the metal, restricting the diffusion of reducible species to these areas, thereby increasing the impedance of the surface and restricting the diffusion of the reducible species, in this case oxygen, and the conduction of electrons in these areas. These inhibitors cause strong cathodic inhibition [14,16-18]. Figure 4 shows an example of a polarization curve of a metal in a solution with a cathodic inhibitor. When the cathodic reaction is affected, the corrosion potential is shifted to more negative values [14,15]. Cathodic inhibitors can also minimize the release of hydrogen through a phenomenon that hinders the discharge of hydrogen, called overvoltage [14].

Organic corrosion inhibitors

Environmental concerns require corrosion inhibitors to be nontoxic, environmentally friendly and acceptable. Green chemistry serves as a source of environmentally friendly green corrosion inhibitors. Corrosion inhibitors are extensively used in the corrosion protection of metals and equipment. Organic compounds with functional groups containing nitrogen, sulfur, and oxygen atoms are generally used as corrosion inhibitors. Most of these organic compounds are not only expensive but also harmful to the environment. Thus, efforts have been directed toward the development of cost-effective and nontoxic corrosion inhibitors. Plant products and some other sources of organic compounds are rich sources of environmentally acceptable corrosion inhibitors. An example of such a system is the corrosion inhibition of carbon steel by caffeine in the presence and absence of zinc. Plant products are a source of environment-friendly green inhibitors such as phthalocyanines [2]. After the addition of the inhibitor, the corrosion potential remains the same, but the current decreases from I_cor to I'_cor. Figure 5 illustrates the mechanism of action of organic inhibitors, which adsorb onto the metal surface and form a protective film on it. The inhibitor efficiency can be measured by the following equation: E_f (%) = [(R_o − R_i)/R_o] × 100, where E_f is the inhibitor efficiency (in percent), R_i is the corrosion rate of the metal with the inhibitor and R_o is the corrosion rate of the metal without the inhibitor [14].
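As a minimal illustration of the efficiency formula above, the sketch below computes E_f from two hypothetical corrosion rates; the numbers are illustrative only, not measured values.

```python
# Inhibitor efficiency: E_f (%) = (R_o - R_i) / R_o * 100
def inhibitor_efficiency(rate_without: float, rate_with: float) -> float:
    """Percentage reduction in corrosion rate achieved by adding the inhibitor."""
    return (rate_without - rate_with) / rate_without * 100.0

# Hypothetical corrosion rates in mpy (mils per year), chosen only for illustration
r_o = 25.0   # uninhibited corrosion rate
r_i = 3.5    # corrosion rate with inhibitor added
print(f"Inhibitor efficiency: {inhibitor_efficiency(r_o, r_i):.1f} %")
```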
Corrosion inhibitor mechanism

Corrosion inhibition mechanisms operating in an acid medium differ widely from those operating in a near-neutral medium. Corrosion inhibition in acid solutions can be achieved by halides, carbon monoxide, organic compounds containing functional-group heteroatoms such as nitrogen, phosphorus, arsenic, oxygen, sulfur, and selenium, organic compounds with multiple bonds, proteins, polysaccharides [14], glue, bitumen, and natural plant products such as chlorophyll and anthocyanins [19]. The initial step in the corrosion inhibition of metals in acid solutions consists of adsorption of the inhibitor on the oxide-free metal surface, followed by retardation of the cathodic and/or anodic electrochemical corrosion reactions [19]. Corrosion inhibitors work by forming a protective film on the metal, preventing corrosive elements from contacting the metal surface, as illustrated in Figure 6. The action mechanisms of corrosion inhibitors are:

• adsorption, forming a film that is adsorbed onto the metal surface;

• inducing the formation of corrosion products, such as iron sulfide, that are passivating species;

• changing the characteristics of the medium, producing precipitates that can be protective and eliminating or inactivating an aggressive constituent.

It is well known that organic molecules inhibit corrosion by adsorption, forming a barrier between the metal (pipeline) and the environment. The polar group of the molecule attaches directly to the metal and the non-polar end is oriented vertically to the metal surface, which repels corrosive species, as illustrated in Figure 6, furthermore establishing a barrier against chemical and electrochemical attack by fluids on the metallic surface [14,20,21]. An inhibitor may be effective in one system but not in another; it is therefore convenient to consider the following factors: the chemical structure of the inhibitor component; the chemical composition of the corrosive medium; the nature of the metal surface (pipeline); the operating conditions (temperature, pressure and pH); the thermal stability of the inhibitor (corrosion inhibitors have temperature limits above which they lose their effectiveness because their components degrade); and the solubility of the inhibitor in the system (solubility in the system is required to achieve optimum protection of the metal surface; it depends on the length of the hydrocarbon chain, the addition of surfactants to enhance the dispersibility or solubility of the inhibitor, and modification of the molecular structure of the inhibitor by ethoxylation to increase its polarity and thus its solubility in the aqueous medium) [21].

The main features of an inhibitor are:

• ability to protect the metal surface;

• high activity, so that it can be used in small quantities (ppm);

• low cost;

• inert characteristics, to avoid altering a process;

• easy handling and storage;

• preferably low toxicity;

• ability to act as an emulsifier;

• ability to act as a foaming agent [21].

Paints for corrosion protection

Paints are made up of a mixture of different components. Although paints designed for different purposes have different formulations, they all have some key features in common.
Paints contain a pigment to give color, including white; a film former that binds the pigment particles together and binds them to the surface to be painted; a liquid that makes it easier to apply the paint; and additives to make the basic paint better to store and to use. There are two main types of paint, namely gloss and emulsion [22]. Table 1 shows a typical gloss paint formulation.

Alkyd resin binder

The alkyd resins produced this way are referred to as oil-modified alkyd resins and contribute about 70% of the conventional binders used in surface coating [23]. They determine the performance quality of surface coatings, such as the rate of drying, gloss, durability of the dry film and resistance of the dry film to abrasion and chemicals. Classification of alkyd resins is based on the oil length and oil type [24]. The vegetable oils used in oil-modified alkyd resins are usually extracted either by mechanical pressing or by solvent extraction [25]. The natural oil in the oil-modified alkyds reacts with atmospheric oxygen, leading to the formation of a network of polymers cross-linked through the C=C bond. The oxidative drying of the oil brings about the formation of a film that shows improved properties with drying time, hardness or water resistance [26]. The oils used in surface coatings contain linolenic and conjugated acid groups; such oils include linseed, perilla and tung oils and possess pronounced drying abilities [27]. There has been a tremendous increase in the demand for alkyd resin production for use in the Nigerian surface coating industry due to the rapid growth of the economy [27].

Red oxide pigment

The use of iron oxides as natural pigments has been practiced since earliest times. The iron oxides magnetite, hematite, maghemite and goethite are commonly used as pigments for black, red, brown and yellow colors, respectively. Predominantly, natural red iron oxides are used in primers for steel constructions and cars, reducing corrosion problems. Iron oxides are strong absorbers of ultraviolet radiation and are mostly used in automotive paints, wood finishes, construction paints, industrial coatings, plastic, nylon, rubber and printing ink [28].

Solvent

Solvents are not the only means of removing low molecular weight compounds. Heat can help evaporate saturated fatty acids, such as palmitic and stearic acids, and an improperly stored or displayed painting can become embrittled by the loss of these plasticizers. The long-term behavior of oil paints also seems to indicate that a small amount of evaporation of fatty acids occurs over time. Improper temperature on a hot table may do so as well, since the volatility of the fatty acids becomes significant above 70-80°C [29]. The mechanical properties of a paint film depend upon its basic structure and the presence of small organic molecules that may act as plasticizers. The original structure of a paint film contains the ester bonds of the oil and the bonds produced by the cross-linking of the unsaturated fatty acids through autoxidation. The loss of any of these bonds results in weakening of the film strength. Any loss of the ester bonds must have a significant effect on the structure of the oil paint. After 6 years, the paints made with varying degrees of hydrolyzed oil appear as coherent films, but some disintegrate when solvents, such as acetone or toluene, are applied, because these solvents can remove the low molecular weight compounds that contribute to the stability of the paints [29].
Additives

Additives are small amounts of substances that modify the paint properties; they include driers, anti-skinning agents, anti-corrosive agents, antifreeze, dispersing aids, wetting agents, thickeners, biocides, low-temperature drying aids, anti-foam agents, and coalescing solvents. Driers accelerate the drying (hardening) of the paint by catalyzing the oxidation of the binder, while plasticisers increase the paint's flexibility. Fungicides, biocides and insecticides prevent the growth and attack of fungi, bacteria and insects, and flow control agents improve flow properties. Defoamers prevent the formation of air bubbles entrapped in the coatings; emulsifiers are wetting agents that increase the colloidal stability of the paints in the liquid state; UV stabilizers provide stability of the paints under ultraviolet light; and anti-skinning agents prevent the formation of a skin in the can. Adhesion promoters improve the adhesion of the coating to the substrate, and texturizers impart textures to the coatings [29].

Nanoparticles

Nanoparticles are important scientific tools that have been and are being explored in various biotechnological, pharmacological and purely technological uses. They are a link between bulk materials and atomic or molecular structures. While bulk materials have constant physical properties regardless of their size, among nanoparticles the size often dictates the physical and chemical properties. Thus, the properties of materials change as their size approaches the nanoscale and as the percentage of atoms at the surface of a material becomes significant. For bulk materials, those larger than 1 μm (or micron), the percentage of atoms at the surface is insignificant in relation to the number of atoms in the bulk of the material. Nanoparticles are unique because of their large surface area, which dominates the contributions made by the small bulk of the material [30,31]. In typical nanomaterials, the majority of the atoms are located on the surface of the particles, whereas in conventional materials they are located in the bulk. Thus, the intrinsic properties of nanomaterials differ from those of conventional materials, since the majority of atoms are in a different environment. Nanomaterials represent almost the ultimate in increasing surface area, and they are chemically very active because the number of surface molecules or atoms is very large compared with the number in the bulk of the material. Substances with high surface areas have enhanced physical, chemical, mechanical, optical and magnetic properties, and this can be exploited for a variety of structural and non-structural applications. Nanoparticles/fillers find application in wear-resistant, erosion-resistant and corrosion-resistant coatings [31]. Coatings with nanostructure bring about a reduction in surface contact tension, minimization of moisture penetration, and reduction of surface roughness to 1 nm for better dirt repellence [31].

Synthesis of nanoparticles

Nanoparticles of various types have been synthesized, including gold, silver, magnetite, zinc oxide, silicon oxide, and others [32-35]; they can be synthesized using methods such as the breakdown (top-down) method and the build-up (bottom-up) method [33].

Plant extracts

Plants naturally synthesize chemical compounds in defense against fungi, insects and herbivorous mammals. Some of these compounds, or phytochemicals, such as alkaloids, terpenoids, flavonoids, polyphenols and glycosides, prove beneficial to humans in unique ways for the treatment of several diseases.
These compounds are identical in structure and function to conventional drugs. Extracts from parts of plants such as roots, stems and leaves also contain such extraordinary phytochemicals, which are used as pesticides, antimicrobials, drugs and herbal medicines [4,36-38].

Plant extracts as green inhibitors

Plant extracts are extensively used as corrosion inhibitors. They contain a variety of organic compounds such as alkaloids, flavonoids, tannins, cellulose and polycyclic compounds. Compounds with heteroatoms (N, O, S, P) coordinate with the (corroding) metal atom or ion, consequently forming a protective layer on the metal surface, which prevents corrosion. These extracts serve as cheaper, readily available, renewable and environmentally benign alternatives to costly and hazardous corrosion inhibitors (e.g., chromates). Plant extracts serve as anticorrosion agents for various metals such as mild steel, copper, zinc, tin, nickel, and aluminum and its alloys [37,38].

Cassava plant

Cassava can be grown over a wide range of conditions and can yield satisfactorily even in acidic soils where most other crops fail [39]. The crop has continually played very vital roles, which include providing income for farmers, a low-cost food source for both rural and urban dwellers, and household food security. In Nigeria, cassava is generally believed to be cultivated by small-scale farmers with low resources. It also plays a major role in the effort to alleviate the food crisis in Africa [39]. Cassava, with the botanical name Manihot esculenta, is a woody shrub of the spurge family, Euphorbiaceae, native to South America. It is extensively cultivated as an annual crop in tropical and subtropical regions for its edible starchy tuberous root, a major source of carbohydrates [40]. World production of cassava root was estimated to be 245 million tonnes in 2012 [41]. Africa produces about 137 million tonnes, which is the largest contribution to world production; 75 million tonnes are produced in Asia; and 33 million tonnes in Latin America and the Caribbean, specifically Jamaica. Nigeria is the world's largest producer of cassava, producing about 37.5 million tonnes annually [41]. A mature cassava root (hereafter referred to as 'root') may range in length from 15 to 100 cm and weigh 0.5-2.5 kg. Circular in cross-section, it is usually fattest at the proximal end and tapers slightly toward the distal portion. It is connected to the stem by a short woody neck and ends in a tail similar to a regular fibrous root.

Cassava bark (CB)

Cassava bark, also known as cassava peels, is usually dumped in abundance as waste, although it has been reported that both the leaf and the bark contain cyanogenic glucosides (linamarin and lotaustralin), starch, amino acids, carbohydrates, proteins and tannin [41]. Cassava bark (CB) consists of two layers, namely the outer skin and the inner skin [42]. Both layers together end up as agro-waste, and since annual production is high and the bark has been reported to make up 5-10% of the cassava root [42-44], the amount of agro-waste that can be generated from cassava bark is significant. CB is used for animal feed [42] and biogas production [43].

Cassava leaf (CL)

Cassava leaves are sometimes considered agro-waste, though they have other applications such as animal feed [42,45], medicinal uses (Aro, 2008) and pack-cyaniding of mild steel [45]. Cassava leaves are also known for their high HCN content, low energy, bulkiness and high tannin content [42,45].
Cassava leaves are nutritionally valuable products, and the cassava plant can yield 7-15 tonnes of leaves per hectare, which accounts for an additional 1 tonne of valuable protein and 2.5 tonnes of carbohydrate per hectare [42]. Up to 6% of cassava leaves can be obtained from the total production of cassava [42].

Cassava stem (CS)

Cassava stem is the largest waste generated from cassava plantations after harvest; up to 400 bundles can be obtained per hectare (Information and Communication Support for Agricultural Growth in Nigeria, 2015). The estimated amount of cassava stem waste generated shows that enough can be obtained for processing into useful applications. Cassava stem can be fed to pigs, poultry and dairy cattle [45] and used for biochar production [46].

Synthesis of cassava plant extract nanoparticles (CPENPs)

CPENPs, which comprise cassava bark nanoparticles (CBNPs) [47], cassava leaf nanoparticles (CLNPs) [48] and cassava stem nanoparticles (CSNPs) [48], were obtained by first soaking the material for 24 h and then ball milling it for 60 h to achieve a particle size below 100 nm, as estimated by SEM with the Gwyddion software, XRD and TEM [47,48]. EDX revealed trace elements such as O, Si, Ca, K, Fe and S; these heteroatoms can be added to coatings to help inhibit corrosion on metal surfaces [47,48]. Elements like Si and Ca would improve the strength of coatings as well as reduce the corrosion rate of coated metals [47,48]. XRD revealed compounds such as SiO2, CaCO3, Ca2(SO4)2H2O and CaC2O4(H2O); these compounds would help improve the mechanical properties of alloys, composites and coatings. SiO2, if added to coatings, will improve the coating hardness, while CaCO3 in coatings will form a precipitate that serves as a protective film on the metal surface, thereby protecting the metal from corrosion. FTIR results revealed the nature of the bonds that exist in the CLNPs, and GC-MS results showed the various organic compounds present in the CLNPs [48]. These organic compounds can be classified as fats, waxes, alkaloids, proteins, phenolics, simple sugars, pectins, mucilages, gums, resins, terpenes, starches, glycosides, saponins and essential oils, all of which help improve the properties of metallic coatings [47,48]. This chapter discusses the synthesis and characterization of CPENPs, which can be used as additives in coatings for corrosion protection, especially coatings for oil and gas applications, owing to the properties discussed by Kolawole et al. [46-48]. Therefore, CPENPs should not be left to waste, as they are useful as additives in coatings. This will add value to the CPENPs that are usually dumped in the environment and will also reduce environmental pollution.

Conclusions

Cassava plant extract (bark, leaf and stem) nanoparticles can be utilized as green corrosion inhibitors incorporated into paints or coatings as additives. The cassava plant waste utilized will reduce the amount of waste contributing to environmental nuisance. The cassava plant solid extract (bark, leaf and stem) will serve as a source of wealth creation for farmers and of value addition to cassava waste. The developed corrosion-resistant paint will enhance the corrosion resistance of API 5L X65 steel pipelines used in the oil and gas industries.
Since the world production of cassava is about 268 million tonnes annually, the amount of cassava waste generated is significant; the developed corrosion-resistant paint will therefore be cheaper and efficient because of the presence of heteroatoms and organic compounds, which help in inhibiting corrosion.
Differences in the oxidative balance of dispersing and non-dispersing individuals: an experimental approach in a passerine bird Background Dispersal is often associated with a suite of phenotypic traits that might reduce dispersal costs, but can be energetically costly themselves outside dispersal. Hence, dispersing and philopatric individuals might differ throughout their life cycle in their management of energy production. Because higher energy expenditure can lead to the production of highly reactive oxidative molecules that are deleterious to the organism if left uncontrolled, dispersing and philopatric individuals might differ in their management of oxidative balance. Here, we experimentally increased flight costs during reproduction via a wing load manipulation in female collared flycatchers (Ficedula albicollis) breeding in a patchy population. We measured the effects of the manipulation on plasmatic markers of oxidative balance and reproductive success in dispersing and philopatric females. Results The impact of the wing load manipulation on the oxidative balance differed according to dispersal status. The concentration of reactive oxygen metabolites (ROMs), a marker of pro-oxidant status, was higher in philopatric than dispersing females in the manipulated group only. Differences between dispersing and philopatric individuals also depended on habitat quality, as measured by local breeding density. In low quality habitats, ROMs as well as nestling body mass were higher in philopatric females compared to dispersing ones. Independently of the manipulation or of habitat quality, plasma antioxidant capacity differed according to dispersal status: philopatric females showed higher antioxidant capacity than dispersing ones. Nestlings raised by philopatric females also had a higher fledging success. Conclusions Our results suggest that dispersing individuals maintain a stable oxidative balance when facing challenging environmental conditions, at the cost of lower reproductive success. Conversely, philopatric individuals increase their effort, and thus oxidative costs, in challenging conditions thereby maintaining their reproductive success. Our study sheds light on energetics and oxidative balance as possible processes underlying phenotypic differences between dispersing and philopatric individuals. Electronic supplementary material The online version of this article (doi:10.1186/s12862-016-0697-x) contains supplementary material, which is available to authorized users. Background Dispersal, defined as a movement between the birth site and the first breeding site (natal dispersal) or between two successive breeding sites (breeding dispersal; Greenwood & Harvey [1]), has important ecological and evolutionary consequences both at the individual and the population level [2,3]. In particular, dispersal allows individuals to escape adverse conditions and thereby enhance their fitness. It is also a key driver of gene flow and metapopulation dynamics [4,5]. Individual dispersal propensity often covaries with other behavioural, morphological and physiological traits [6][7][8], a covariation which can have a genetic as well as environmental basis [9,10]. These associations of traits are thought to have evolved because they reduce time and energy costs during the movement phase and/or exploration and competition costs during settlement in the new habitat [11]. Accordingly, dispersing individuals can show morphological adaptations to movement, such as larger wings or fat store [12,13]. 
They also show behavioural and physiological adaptations to competitive encounters, such as higher aggressiveness [14], and to the exploration of a new habitat, such as higher exploratory behaviour [10,15,16], lower xenophobia [12] or higher immune response [12,17]. Although the association of phenotypic traits with dispersal propensity may be favoured if these traits reduce some of the costs of dispersal (e.g. increasing settlement success in a new habitat patch, reducing the effect of unfamiliarity with the new patch), they may nonetheless entail long-term costs to dispersing individuals in terms of reproductive success or survival prospects, especially when resources are scarce [18]. Indeed, most of the phenotypic traits found to be associated with dispersal (e.g. high aggressiveness, exploration, immunity, or metabolic rate; Clobert et al. [8]) are likely to be energetically demanding [15]. Due to such energetic constraints, dispersing and philopatric individuals may evolve different life-history strategies, with different relative investment in maintenance and reproduction [19,20]. Although metabolic requirements could play an important role in shaping these strategies [21], the physiological constraints that underlie life-history variation in relation to dispersal remain unclear. Among the metabolic processes that could be involved in shaping such life-history variation, the regulation of the oxidative balance is expected to play a particularly important role. Energy production through aerobic metabolism leads to the production of highly unstable oxidative components, called reactive oxygen species or ROS [22,23]. Although ROS are important messengers in central cell signalling pathways such as cell death signals [24,25], they can also damage the structure of biological macromolecules through oxidation and thereby disturb from cell to whole organism functioning, i.e. impose an oxidative stress. If a higher metabolic rate is selected in dispersing individuals compared to philopatric ones to face increased energetic requirements, dispersers could be exposed to a higher production of ROS that could lead to more oxidative damage and reduced life expectancy [20] (but see [26] for a thorough discussion of the links between metabolism and ROS production). Oxidative damages can be prevented through antioxidant defences including inducible enzymes (such as the superoxide dismutase; Balaban et al. [23]) or molecules acquired through the diet (such as vitamin E; Halliwell and Gutteridge [27]). Therefore, dispersing individuals may also regulate a higher production of ROS via an increased investment in antioxidant defences, either internally produced or externally acquired. So far, studies on the links between oxidative balance and personality traits found to be associated with dispersal are inconclusive: higher exploratory behaviour was associated with higher antioxidant defences and lower oxidative damages to lipids in greenfinches [28] whereas no such effect was observed in blue tits [29]. No study has however directly tested for links between dispersal and oxidative balance. Here, we explored whether dispersing and philopatric individuals differ in oxidative balance in a patchy population of a migratory passerine bird, the collared flycatcher Ficedula albicollis. Dispersal was defined as a binary variable, i.e. a change of breeding plot between birth and the first breeding event (natal dispersal) or between two consecutive breeding events (breeding dispersal). 
In this population, individuals show consistent and heritable differences in dispersal [30,31]. Collared flycatchers migrate each winter to sub-Saharan Africa, whereas dispersal is measured over comparatively small spatial scales (see Methods and Additional file 1: Figure S1), leading to negligible direct physiological costs of dispersal movement between plots from one year to the next. Moreover, exploration and prospection occur before migration, in the previous year, for both breeding adults and juveniles [32], thus the energetics costs of prospection may be expected to be low at the beginning of the breeding season. It follows that differences in oxidative balance according to dispersal are expected to stem out of differences in behaviour, life-history strategy or metabolism between dispersing and philopatric individuals rather than reflect direct physiological costs of prospection and dispersal movement per se. We investigated differences in several markers of energy management and oxidative balance during reproduction, as well as reproductive output, between individuals having or not dispersed between habitat plots. The physiological markers studied included total body mass, fat mass and fat-free mass measured through the doubly-labelled water method [33], primary oxidative damage measured as reactive oxygen metabolites (ROMs) concentration in the plasma [34] and plasma antioxidant capacity estimated through the OXY test [35]. Because metabolic and oxidative balance differences between individuals are more likely to become apparent under constrained energetic conditions, we experimentally manipulated the level of energetic demand by increasing flight costs (through reducing wing area) during reproduction. Such wing load manipulation was successful at increasing energy expenditure in our study population (Additional file 1: Supplementary Information S1). We focused on females because they can easily be manipulated as early as incubation in this species, allowing sufficient time for the manipulation to impact energetic demand and reproductive decisions during nestling rearing (Additional file 1: Figure S2). Such manipulation would increase the reproductive effort necessary to maintain the same reproductive success. Differences in the physiological parameters and/ or reproductive output between dispersing and philopatric females could also arise from differences in habitat quality, either because dispersing and philopatric individuals respond differently to habitat quality or because they settle in habitats of different quality. Therefore, we also controlled statistically in our analyses for natural environmental variation in habitat quality, measured by the local breeding density of conspecifics, which positively relate to reproductive success in this population [36]. If dispersing and philopatric females only are of different intrinsic quality, the lowest quality individuals should show both a stronger decrease in reproductive success and a stronger increase in oxidative stress in response to handicap and/or at low densities ("quality" hypothesis). If however they have different strategies of investment in maintenance and reproduction, individuals maintaining their reproductive success in response to handicap and/or at low densities should show an increase in reproductive effort, and thus oxidative stress. 
Oxidative costs resulting from reproductive effort should remain limited only at the cost of lower reproductive success under those energetically constrained conditions ("investment strategy" hypothesis). Nestling body mass and fledging success Differences in nestlings' body mass between dispersing and philopatric foster mothers depended on plot density (interaction dispersal status x plot density: F 1,151 = 5.05, P = 0.026; Fig. 3). Chicks raised by dispersing mothers reached a lower body mass than chicks raised by philopatric mothers in low density plots (plot density in first tertile: 0.95 ± 0.37, F 1,43 = 6.54, P = 0.014), whereas there was no significant difference in intermediate (plot density in second tertile: F 1,44 = 0.14, P = 0.72) and high density plots (plot density in third tertile: F 1,53 < 0.00001, P = 0.99). The wing load manipulation had no effect on nestling body mass, either alone or in interaction with the dispersal status of the mother or with plot density (all P > 0.18). Nestling body mass was also lower in 2013 compared to 2012 (−1.82 ± 0.23, F 1,164 = 64.28, P < 0.0001), decreased with increasing brood size (−0.31 ± 0.09, F 1,152 = 10.60, P = 0.001) and increased with the time of weighing (2.57 ± 0.86, F 1,156 = 8.99, P = 0.003). There was no effect of the body mass of the foster mother (F 1,134 = 1.55, P = 0.22) on nestlings' body mass.
Fig. 3 Mean nestling body mass in relation to plot density for dispersing and philopatric foster mothers. Nestling body mass was measured at 12 days of age and corrected for between-year differences. Plot density quantiles were used to define three density classes for the sake of illustration (see Fig. 2).
Link between oxidative balance and reproductive output Nestling body mass increased with the antioxidant capacity of the foster mother (+0.35 ± 0.13, F 1,92 = 7.34, P = 0.008) and decreased with her ROMs concentration (−0.42 ± 0.20, F 1,94 = 4.47, P = 0.04). Nestling fledging probability was independent of the antioxidant capacity of the foster mother (χ 2 1 = 0.68, P = 0.41) or her ROMs concentration (χ 2 1 = 1.33, P = 0.25). All these effects were independent of the dispersal status of the foster mother, plot density or wing load manipulation (all P > 0.07). Discussion Dispersal is considered an energetically demanding behaviour that may entail costs through increased exposure to oxidative stress. In this study, we experimentally investigated whether dispersing and philopatric individuals differ in metabolic markers during reproduction depending on the energetic demand. Only plasma antioxidant capacity was higher in philopatric than dispersing females independently of the experimental increase in wing load and of local breeding density. Differences in ROMs concentration between dispersing and philopatric individuals depended on internal (wing load manipulation) and/or external (plot breeding density) factors. In response to the increase in wing load, metabolic rate increased in both dispersing and philopatric females (Additional file 1: Supplementary Information S1), but ROMs increased in philopatric females only. Similarly, philopatric individuals showed higher ROMs than dispersing ones in low-density plots only. Overall, nestlings raised by dispersing mothers had a lower fledging probability and body mass compared to those raised by philopatric mothers, especially in low-density plots.
Our results suggest that dispersing and philopatric individuals manage oxidative balance and reproductive investment differently under constrained energetic conditions. Differential management of ROMs in response to experimental energetic constraints The wing load manipulation modified female energy budget, with a higher fat-free mass for manipulated compared to control females. The difference in fat-free mass likely results from an increase in muscular mass, which has a critical influence on flight performance [37] and can increase following wing load manipulations [38]. This would at least partly explain the absence of a decrease in body mass between control and manipulated females, a result previously observed in various passerine species [38][39][40][41][42][43][44]. Interestingly, female wing load manipulation affected neither the physiological parameters of their partners (Additional file 1: Table S1) nor the mass and fledging success of their nestlings. This suggests that manipulated females developed stronger flight muscles allowing them to maintain the same reproductive output as control females, without requiring any noticeable physiological compensation from their partner. Behavioural measures of reproductive investment, such as feeding rates, would help to confirm the absence of compensation by mates of manipulated females. The increase of field metabolic rate in response to wing load manipulation (see Additional file 1: Supplementary Information S1) is expected to come at an oxidative cost. A study in the great tit Parus major however showed no effect of feather clipping on ROMs concentration and antioxidant capacity [45], suggesting that such costs if they exist are not straightforward. The interaction between female dispersal status and manipulation on ROMs concentrations suggests that the oxidative cost of the manipulation might differ between dispersing and philopatric females. Among philopatric females, manipulated females showed higher ROMs concentrations than control ones, whereas there was no difference among dispersing females (Fig. 1). Therefore, dispersing females were able to mitigate the deleterious effect of increased metabolic rate, at least on the short-term, whereas philopatric females were not. As higher ROMs have been related to lower survival and lower reproductive output in other bird species [46][47][48][49], those differences might transfer into longterm fitness costs and thus mediate the trade-off between current and future reproduction. Differential management of ROMs and reproduction in response to habitat quality We used the density of breeders in a plot as a measure of local habitat quality. We found increasing nestling body mass with increasing density. Thus, in general, individuals did not appear to undergo stronger competition in denser plots. On the contrary, denser plots appeared of higher quality in terms of reproduction and may thus be more attractive. This is in line with previous results in this population showing a positive correlation between local breeding density and success at the plot scale, and consequently higher immigration rate [36]. Differences between dispersing and philopatric individuals in ROMs concentration depended on plot density (Fig. 2): philopatric individuals had higher ROMs concentrations than dispersing ones in low-density plots but not in other habitats. 
This difference was paralleled by the higher body mass of nestlings from philopatric females in low-density plots only, suggesting a trade-off between exposure to oxidative stress and offspring quality. Our data were however not sufficient to properly test for a within-individual correlation between these two traits, and the overall relationship between them was negative, suggesting that it was mainly driven by differences in individual quality: high quality individuals have both low ROMs concentrations and heavy offspring. Overall, the interactions observed between dispersal status and breeding density on measures of oxidative balance and reproductive success suggest that habitat quality plays a key role in shaping oxidative costs during reproduction. Differences in the effect of dispersal measured in habitats of varying quality could result from the multi-causal nature of dispersal and the resulting heterogeneity between dispersing and philopatric individuals. For example, individuals dispersing to low quality habitats might have lower competitive abilities than those dispersing to high quality habitats [50-52]. Alternatively, as suggested by the effect of the experimental manipulation, the differences between dispersing and philopatric individuals could reflect different responses to environmental and physiological challenges. However, we cannot fully exclude that high quality individuals settle in high quality habitats. An experimental manipulation of habitat quality, e.g. through food supplementation or parasite infestation, would help to disentangle the role of habitat and individual quality on the management of oxidative costs. An overall difference in antioxidant capacity and reproductive success Philopatric females showed higher plasma antioxidant capacity and higher nestling survival than dispersing ones. Plasma antioxidant capacity has been shown to be correlated with dietary non-enzymatic antioxidants (e.g. vitamins, carotenoids) in humans [53-56] and birds [57]. Indeed, the OXY-test used here to measure antioxidant capacity, through the reduction of the activity of hypochlorous acid, mostly reflects the activity of these non-enzymatic antioxidants rather than enzymes targeting specific oxidants such as superoxide, hydrogen peroxide or lipid peroxide. Thus the difference between dispersing and philopatric females in antioxidant capacity and nestling survival supports the idea that philopatric individuals have higher familiarity with their habitat and may be more efficient at finding high quality resources [58]. Alternatively, philopatric and dispersing individuals may be of different quality prior to dispersal. Discriminating these alternatives would require (i) sampling individuals for antioxidant capacity before dispersal and (ii) using translocation experiments to evaluate the benefits of familiarity. Birds often respond to experimental increases in reproductive effort by increasing antioxidant protection to maintain stable oxidative damages [59,60]. Here however, circulating non-enzymatic antioxidants were not increased in response to the wing load manipulation. Some major enzymatic antioxidants, such as catalase and superoxide dismutase, could be alternative low-cost antioxidant mechanisms mobilized when facing an oxidative challenge [61]. Quantifying multiple antioxidants would help determine whether these different antioxidant mechanisms are correlated or, on the contrary, are traded against each other [62].
It was however not possible here because of the small quantity of plasma available in this small passerine species. Conclusion Overall this study shows that dispersal-related differences in metabolic markers and reproductive success are often condition-or habitat-dependent. Although our results reveal no general associations between metabolic markers and dispersal, dispersing and philopatric individuals showed different management of oxidative costs in response to reproductive effort (wing load manipulation). They suggest that dispersing individuals do not adjust reproductive effort even in challenging conditions, resulting in a lower reproductive output, contrary to philopatric individuals that may adjust their effort to the local conditions, possibly because of their better knowledge of the environment. Our study calls for further work investigating the differential management of oxidative constraints between individuals, especially in the context of dispersal. Study population and definition of dispersal The study was conducted during the springs 2012 and 2013 in nine forest plots on the island of Gotland, Sweden (57°07′N, 18°20′E). Collared flycatchers are hole-nesting passerine birds that readily breed in artificial nest boxes. Plots surfaces ranged from 3.0 to 15.4 ha (mean ± S.D. = 8.1 ± 3.8) and between 13 and 78 nest boxes (mean ± S.D. = 44 ± 20) were regularly spaced in each plot, resulting in an average distance between nest boxes of 37 to 48 m (mean ± S.D. = 43.0 ± 4.1). The distance between plots ranged from approximately 525 to 6000 m (mean ± S.D. = 2688 ± 1381), with only three pairs of plots out of 36 being less than 1 km distant (Additional file 1: Figure S1). Nests were visited every third day to record laying date and clutch size. Close to hatching, nests were visited daily to record hatching date and number of hatched eggs. Nestlings were crossfostered when two-days old to measure post-hatching female decisions and investment independently from prehatching effects (i.e. to control for prehatching effects in the differences observed during the nestling rearing phase; Additional file 1: Supplementary Information S3). All females were caught twice (Additional file 1: Figure S2): once 5 to 12 days (on average 7.9 ± 0.9 (SD) days) after the start of incubation, and then again when the nestlings were 5 to 16 days old (on average 8.8 ± 2.3 (SD) days). Only previously ringed females were included in this study. They were weighed to the nearest 0.1 g, aged (yearlings or older adults) based on plumage characteristics [63] and their tarsus length was measured to the nearest 0.1 mm by a single observer (C.R.). Nestlings were weighed and their tarsus length measured when 12 days old. After fledging, nests were checked for the presence of dead nestlings to record the final number of fledglings. The study plots are separated mainly by habitat unsuitable for breeding in this species (fields and pastures). This spatially fragmented configuration allows defining dispersal as a change of breeding plot between birth and the first breeding event (natal dispersal) or between two consecutive breeding events (breeding dispersal; see [64] for a discussion of this binary definition of dispersal in this population). We considered in our analyses only previously ringed individuals, whose dispersal status was defined based on movements between 2011 and 2012 for 2012 breeders and between 2012 and 2013 for 2013 breeders. 
We thus excluded the 143 previously unringed immigrant females out of 327 observations, i.e. 44.8 %. As in many species, dispersal was more frequent in yearlings than in older females (respectively 75 and 26 %; χ 2 1 = 26.5, P < 0.001; see Additional file 1: Table S2). Our final dataset included 97 females in 2012 and 87 females in 2013, among which 26 females were caught in both years. Wing load manipulation Female flight energy requirement was increased by cutting the two innermost primaries of each wing at their base to mimic feather loss naturally occurring at the onset of moult [65-67]. Upon capture during incubation, previously ringed females were alternately assigned to the manipulated or the control group (same handling conditions but no feathers cut). Manipulated females (N = 93; 62 philopatric and 31 dispersing) did not differ from control ones (N = 91; 57 philopatric and 34 dispersing) in terms of age and main morphological and breeding characteristics (Additional file 1: Table S2). The wing load manipulation was successful at increasing energy expenditure (Additional file 1: Supplementary Information S1). Body composition Body composition was measured by hydrometry [33] for 117 females chosen randomly within each experiment-by-dispersal group. After injection, females were kept in a cloth bag for 45 to 60 min so that the isotopes equilibrate with body water [67,68]. This variation in equilibration time was unrelated to the estimates of fat-free mass calculated from this equilibration process (Spearman rank correlation test: ρ = 0.097, S = 241099, P = 0.30). After this period of time, a 50 μL blood sample was taken and females were released. To limit the amount of blood taken from each experimental female, 12 non-experimental females in 2012 and 20 in 2013 were sampled to estimate the background level of isotope enrichment for a given year (mean ± S.D.: δD = −41.9 ± 5.6 ‰ and δ18O = −1.7 ± 0.6 ‰ in 2012; δD = −41.7 ± 6.0 ‰ and δ18O = −2.6 ± 0.6 ‰ in 2013). Blood samples were collected in heparinised glass capillaries and immediately flame-sealed. After fieldwork, samples were cryo-distilled for about 10 min under a vacuum system. Each sample was measured four times and, for each measurement, 0.1 μL of distillate was injected into an elemental analyser with thermal conversion (TC/EA) connected to a continuous-flow isotope ratio mass spectrometer (IRMS DELTA V PLUS, Thermo Scientific, Waltham, MA, USA). Each measure was first corrected for drift and memory effect, then normalized to the VSMOW2/SLAP2 international scale. Samples were excluded if the standard deviation exceeded 2 ‰ for deuterium and 0.2 ‰ for 18-oxygen on more than two out of the four analyses. The mean of the three or four retained analyses was then used as the sample measure. Total body water was calculated from the 18-oxygen labelled water using a correction factor of 1.007 for exchange. Fat-free mass was derived from total body water and the average hydration coefficient (73.2 %). Fat mass was calculated as the difference between total body mass and fat-free mass.
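For concreteness, this body-composition arithmetic can be written out as a short calculation. The sketch below is illustrative rather than the authors' code; it assumes the 18-oxygen dilution space has already been derived from the isotope enrichments, and the function name and example values are invented.

```python
# Minimal sketch of the body-composition arithmetic described above.
# Assumes the 18-oxygen dilution space (in g) is already known.

HYDRATION_COEFFICIENT = 0.732  # average hydration of fat-free mass (73.2 %)
EXCHANGE_CORRECTION = 1.007    # correction factor for isotopic exchange

def body_composition(total_body_mass_g, dilution_space_18o_g):
    """Return (fat_free_mass_g, fat_mass_g)."""
    total_body_water = dilution_space_18o_g / EXCHANGE_CORRECTION
    fat_free_mass = total_body_water / HYDRATION_COEFFICIENT
    fat_mass = total_body_mass_g - fat_free_mass
    return fat_free_mass, fat_mass

# Example with made-up values for a ~13 g collared flycatcher
ffm, fm = body_composition(total_body_mass_g=13.2, dilution_space_18o_g=9.0)
print(f"fat-free mass = {ffm:.2f} g, fat mass = {fm:.2f} g")
```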
Oxidative balance markers To measure blood markers of oxidative balance, a blood sample (max. 40 μL) was taken from the brachial vein into heparin-coated Microvettes (Sarstedt, Nümbrecht, Germany) from females captured while feeding nestlings, either immediately after capture and measurement, or after the equilibration time together with the sample for body composition, if body composition was also measured. Preliminary analyses showed no effect of this difference in sampling protocol on blood parameters. Blood samples were maintained at 5°C in the field before being centrifuged in the evening to separate plasma from red blood cells. Plasma and red blood cells were then stored at −80°C. Two markers of oxidative balance were measured: reactive oxygen metabolites and plasma antioxidant capacity. These markers have been related to reproductive output in different avian species (reviewed in [69]). ROMs are also sensitive to various behavioural and physiological stressors [34]. Because for many individuals less than 20 μL of plasma was available, only a subset of individuals was measured in duplicates or as standards on all plates for each marker, to compute the coefficients of variation (CVs) and repeatabilities. Plasma concentration of ROMs was measured using the d-ROMs test (MC0001 kit, Diacron International, Grosseto, Italy). This test measures the concentration of organic hydroperoxides, which act as precursors of long-term oxidative damage on biomolecules. 4 μL of plasma were mixed with 198 μL of acidic buffer and 2 μL of chromogenic substrate (N,N-diethyl-para-phenylenediamine) and left to incubate for 75 min at 37°C, before measuring OD at 550 nm. To control for the natural opacity of some hyperlipidemic samples, OD at 800 nm was measured and 8 samples with OD 800 > 0.100 were excluded from the analysis (three manipulated and five controls). These high CV values were partly explained by the low absolute values of ROMs concentration, which were lower than those measured in other free-ranging passerines (mean ± S.E. = 0.745 ± 0.004 mM H2O2), thus inflating the relative measurement error [45,71]. Plasma antioxidant capacity was measured as the capacity of plasma to oppose the oxidative action of hypochlorous acid, HClO (OXY adsorbent test, MC434 kit, Diacron International, Grosseto, Italy). Vitamin E (tocopherols) and ubiquinol have only a limited reactivity toward this non-radical oxidant, but vitamin C (ascorbate), flavonoids, carotenoids (lycopene), glutathione and albumin are efficient scavengers of HClO [72-75]. Antioxidant capacity measured through the OXY test does not correlate with plasma uric acid concentrations [35], contrary to other measures of antioxidant capacity such as the FRAP test [35] or the TAS/TEAC test [76]. Each plasma sample was diluted at 1/100 in ultra-pure water. 5 μL of diluted sample were incubated for 10 min at 37°C with 200 μL of HClO solution. Measure of habitat quality We controlled for habitat quality at the plot scale by including as a covariate plot breeding density, measured as the proportion of available nest boxes occupied by flycatchers in a plot during the year considered. Collared flycatchers are thought to show a preference for nest boxes over natural holes, but nest boxes are much more abundant in our plots than natural holes. Therefore the measure of breeding density should still reflect the actual proportion of available cavities occupied. A nest box was considered available to flycatchers when it was empty (i.e. contained no nest from another species, mainly great tit Parus major and blue tit Cyanistes caeruleus) up to five days after the earliest egg laying date for flycatchers in the same plot. Because nesting cavities are a major limiting resource for hole-nesting passerines such as collared flycatchers and are constrained in their availability by the earlier settlement of resident birds (i.e.
tit species), measuring density relative to available nest boxes rather than to all nest boxes in the plot is more likely to accurately reflect the attractiveness of a plot and the intensity of intra-specific competition for cavities. Breeding density in a plot was found to be positively correlated with individual fledging success in this population [36]. Breeding density was not correlated with plot size or the density of nest boxes in either year (Spearman rank correlation test: all P > 0.44). Plots were categorised using tertiles of breeding density only for graphical representation and in post-hoc tests (low density: < 63.32 % of available nest boxes occupied, high density: ≥ 74.07 %). Using the local breeding success (i.e. average number of fledged young per nest in the plot) of control birds as an alternative measure of local habitat quality did not reveal any significant link with measures of metabolism and mass, but drawing inferences from these results was hampered by different biases (detailed in Additional file 1: Supplementary Information S4). Statistical analyses We studied the effect of dispersal status and wing load manipulation on female body mass, body composition, and oxidative balance during the nestling phase. χ 2 contingency-table tests showed that the distribution of individuals among dispersal-by-manipulation groups was similar for the five physiological variables (all P > 0.759). Because different batches of the kits were used for the measurement of ROMs concentration and antioxidant capacity in 2012 and 2013 and we were mostly interested in within-year responses, these values were centred and standardized within each year. A year effect was nonetheless included in the models to account for potential between-year differences in relevant biological processes. In addition to dispersal status (binomial), wing load manipulation (binomial) and plot density (continuous), nestling age on the day of parental sampling (continuous), brood size at hatching (discrete) and year (binomial) were included as fixed effects, as well as all pairwise interactions between dispersal status, plot density and manipulation. For ROMs concentration and plasma antioxidant capacity, adult body mass during nestling feeding (continuous) was included as a covariate, and for body mass and body composition, tarsus length (continuous) was included as a covariate. To account for the non-independence of data for individuals measured in both years and for individuals breeding in the same plot, individual and plot were included as random effects in linear mixed models. The plate was also added as a random effect when modelling ROMs concentration and antioxidant capacity. The effect of female age (yearling vs. older adult) and its interaction with dispersal status were included in preliminary analyses of oxidative balance markers, body composition, body mass and reproductive success to account for potential differences between natal and breeding dispersal, which are under different selective pressures [1]. These terms were retained in none of the final models, which excludes the possibility that the differences observed between dispersing and philopatric individuals are due to age differences; these preliminary models are therefore not described in the results.
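To make the structure of these female-marker models concrete, the following minimal sketch fits a comparable random-intercept model in Python with statsmodels. It is not the authors' code (their analyses were run in R with the 'lmerTest' and 'car' packages), the data frame is synthetic, and for simplicity only the plot random intercept is included, whereas the original models also fitted individual identity and assay plate as random effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: 180 female-by-year observations in 9 plots.
rng = np.random.default_rng(1)
n = 180
df = pd.DataFrame({
    "dispersal":    rng.integers(0, 2, n),      # 1 = dispersing, 0 = philopatric
    "manipulation": rng.integers(0, 2, n),      # 1 = wing-load manipulated
    "density":      rng.uniform(0.4, 0.9, n),   # proportion of occupied nest boxes
    "year":         rng.integers(0, 2, n),      # 0 = 2012, 1 = 2013
    "plot":         rng.integers(0, 9, n),      # plot identity
})
# Standardized ROMs with an invented dispersal-by-manipulation effect.
df["roms"] = (0.3 * df["dispersal"] * df["manipulation"]
              - 0.5 * df["density"]
              + rng.normal(0.0, 1.0, n))

# Fixed effects: main effects plus the pairwise interactions of dispersal
# status, manipulation and plot density (year kept as a covariate);
# random intercept for plot only in this simplified sketch.
model = smf.mixedlm(
    "roms ~ dispersal * manipulation + dispersal * density"
    " + manipulation * density + year",
    data=df,
    groups=df["plot"],
)
print(model.fit().summary())
```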
We investigated the effect of foster mothers' dispersal status on their nestlings' body mass when 12 days old, which is a predictor of future survival and recruitment [77], and on their fledging success. Body mass was investigated using a linear mixed model and fledging probability using a generalized linear mixed model with a logit link function and a binomial error distribution. Foster nest, nest of origin and plot were included as random effects. The dispersal status of the foster mother, her wing load manipulation, the foster plot density, female body mass during nestling feeding, brood size at hatching and year were included as fixed effects, as well as all pairwise interactions between dispersal status, manipulation and density. For nestling body mass, weighing time (continuous) was also included as a covariate. In a second step, we directly tested the effect of female oxidative balance on reproductive output. The foster mother's antioxidant capacity and ROMs concentration, as well as their interactions with dispersal status, wing load manipulation and plot density, were added to the final models of nestling body mass and fledging success obtained in the first step. Fixed effects were selected by stepwise elimination, starting with interactions. Selection criteria were the p-values of type-III F-tests for LMMs, with denominator degrees of freedom calculated using Satterthwaite's approximation (R package 'lmerTest', function anova [78]), and the p-values of type-III Wald chi-square tests for GLMMs (R package 'car', function Anova [79]). No selection was performed on random effects, which were thus kept in all final models. The complete final models, as well as the partition of the random effect variances, are given in Additional file 1: Tables S3 and S4. The homoscedasticity and normality of residuals were checked graphically. Additional file Additional file 1: Supporting detailed information about protocols (Supplementary Information S1-S4), a map of the study area (Figure S1), a timeline of experimental procedures over the breeding season (Figure S2), models of male body mass, oxidative balance and reproduction (Table S1), pre-manipulation values of reproductive and biometrical variables according to dispersal status and treatment group (Table S2), and detailed results of the models on females (Tables S3 and S4). (PDF 887 kb) Acknowledgements We are grateful to Lionne, Antoine Gillet, Alice Goossens, Nicolas Guignard, Thomas Guyot, Audrey Le Pogam, Aurélie Pugnière, and Lise Sannipoli for their precious help in the field, and to three anonymous reviewers for their constructive comments on a previous version of this manuscript. Funding This work was supported by grants from the French Ministry of Research (PhD fellowship to CR), the University of Aberdeen (stipend to CR), the CNRS (PICS grant to BD), the L'Oréal Foundation-UNESCO "For Women in Science" program (fellowship to CR), the Région Rhône-Alpes (student mobility grant CMIRA Explora'doc to CR), the Rectors' Conference of the Swiss Universities (mobility grant to CR), the Fédération de Recherche 41 BioEnvironnement et Santé (training grant to CR), and the Journal of Experimental Biology (travel grant to CR). Availability of data and materials The dataset supporting the results of this article is available from the figshare repository (http://dx.doi.org/10.6084/m9.figshare.3408778) [80]. Authors' contributions CR, SB, FC, BD, and PB designed the study; CR, AZ, and SB designed the doubly-labelled water protocol; CR carried out the field work; CR, AZ and MA performed the laboratory analyses; CR analysed the data. All authors contributed to the writing of the manuscript. All authors have read and approved the final manuscript.
Competing interests The authors declare that they have no competing interests. Consent for publication Not applicable. Ethics approval and consent to participate The data and samples were collected under permission from the Ringing Centre of the Museum in Stockholm (bird catching, measuring and ringing) and the Ethical Committee for Experiments on Animals in Sweden (experimental manipulation of flight feathers and blood taking).
Protein Microarray Analysis of Soluble Biomarkers in Fertile and Postmenopausal Women in Relation to Obesity, Bone Status and Blood Cell Populations Objective: The purpose of the present study was to quantify serological biomarkers, namely adiponectin, leptin, E-selectin, ICAM-1 (intercellular cell adhesion molecule-1), IGF-1 (insulin-like growth factor 1), MCSF (macrophage colony-stimulating factor), IL-6, IL-10 and IL-17, in fertile and postmenopausal women. The relationship of these parameters to obesity and bone status has been analysed. Methods: 96 fertile and 107 postmenopausal women were enrolled in this study. The detection of multiple biomarkers, cytokines, growth factors, adhesion molecules and others, was performed by protein microarray. Results: Adiponectin was up-regulated in postmenopausal control women and ICAM-1 in postmenopausal obese women when compared to fertile obese women (p<0.05). There was a significant negative correlation between IL-6 and age (r=-0.177, p<0.05). The soluble form of ICAM-1 was higher in women with osteopenia, but not with osteoporosis, in comparison to the control group. IL-6 was decreased in women affected by osteoporosis. Conclusion: In the context of obesity, bone status and ageing, we have found alterations in the adiponectin, ICAM-1, and IL-6 parameters in postmenopausal women. We identified several statistically significant correlations between soluble biomarkers and cell populations (e.g. between IGF-1 and B-lymphocytes; IL-6 and dendritic cells; IL-10 and naive T cells, etc.). Introduction Postmenopausal women are susceptible to environmental or genetic factors and may experience the progression or initiation of diverse diseases such as obesity, osteoporosis and coronary heart disease, triggered by a systemic change in the balance of proinflammatory cytokine activity. The onset of menopause is generally associated with a hormone deficiency, which is a contributory factor for the increased incidence of osteoporosis, cardiovascular diseases, and vasomotor disturbances. There is substantial evidence that the decline in ovarian function with menopause is associated with spontaneous increases in proinflammatory cytokines (e.g. IL-1, IL-6, and TNF-α), chemokines and colony-stimulating factors. The exact mechanisms by which estrogens interfere with cytokine activity are not completely known; potential mechanisms include estrogen receptor interactions with other transcriptional regulators, modulation of nitric oxide activity, antioxidative effects, plasma membrane actions, and changes in immune cell function. Cytokine metabolism changes are affected by aging, and a general decline in immune responses contributes to increased susceptibility to infection. Significant differences between fertile women and their postmenopausal counterparts are manifested in immunoregulatory molecules, such as anti-inflammatory cytokines (IL-4, IL-10) and growth factors (e.g. IGF-1, insulin-like growth factor 1) [1-3]. During the last decades, obesity and osteoporosis have become important global health problems with an increasing prevalence and a high impact on both mortality and morbidity worldwide. Age and female sex increase the risk of developing both obesity and osteoporosis, which affect millions of women. The fat-derived mediators, which include resistin, leptin, and adiponectin, affect human energy homeostasis and are involved in bone metabolism, contributing to the complex relationship between adipose and bone tissue.
Obesity is associated with chronic low-grade inflammation. Some proinflammatory cytokines (TNF-α and IL-6) are factors that negatively regulate bone metabolism in relation to obesity [4-7]. Experimental and clinical studies strongly support a link between an increased state of proinflammatory cytokine activity and postmenopausal bone loss. Bone metabolism is determined by a multitude of genetic and environmental influences. The pathogenesis of chronic disorders of these tissues is complex, but there is increasing support that the development of these disorders may be in part linked to an increased state of proinflammatory activity. The imbalance between bone formation and bone resorption is known to be responsible for postmenopausal bone loss. Clinical and molecular evidence indicates that estrogen-regulated cytokines exert regulatory effects on bone turnover, implicating them as the primary mediators of the accelerated bone loss at menopause [8]. Proinflammatory cytokines are the most powerful stimulants of bone resorption. They intervene in osteoclastogenesis, both directly and through the stimulation of other local factors, from the proliferation and differentiation of the early osteoclast precursor cell to the resorption capacity and the lifespan of the mature osteoclast [9-11]. Peptides mediating energy homeostasis (i.e. leptin, adiponectin) may play an important role in the weight and body composition changes of postmenopausal women. Leptin and adiponectin play an important role in the regulation of body weight and energy homeostasis, and in postmenopausal women they are partially determined by sex hormone and inflammatory marker levels [12-15]. A strong association has been reported among obese patients between elevated biomarkers of systemic inflammation and endothelial dysfunction, insulin resistance and metabolic abnormalities. ICAM-1 (intercellular cell adhesion molecule-1) and E-selectin expression are generally upregulated in most inflammatory processes and represent important determinants for leukocyte recruitment [16,17]. In visceral adipose tissue, ICAM-1 and VCAM-1 (vascular cell adhesion molecule-1) expression and protein levels positively correlated with body mass index (BMI). Obesity was associated with increased adhesion molecule mRNA expression and protein levels in visceral adipose tissue [18,19]. The purpose of the present study was to quantify serological biomarkers in fertile and postmenopausal women, namely adiponectin, leptin, E-selectin, ICAM-1, IGF-1, MCSF (macrophage colony-stimulating factor), IL-6, IL-10 and IL-17A. The relationship of these parameters to obesity and women's bone status has also been analysed. Finally, we evaluated the correlations between soluble biomarkers and cell populations in fertile and postmenopausal women's samples. The data and method for the detection of cell surface markers were introduced in our previous publication [20]. Samples Peripheral blood was collected in 10-ml tubes and isolated sera were stored at -70°C. We analyzed fertile (n=96; age range 26-52 years) and postmenopausal Slovak women (n=107, age range 48-79 years). Subjects were recruited via health practitioners in accordance with ethical committee requirements. Written informed consent was obtained from all participants before entering the study (EC SMU 06102011).
Criteria for exclusion were acute physical illness or unstable physical condition, pregnancy, daily smoking for over one year, nephropathy with a glomerular filtration rate GFR<0.75 mL/s, endocrinopathy, diabetes mellitus, active hepatitis and liver cirrhosis, cancers, anemia, severe cardiovascular disease, malabsorption diseases or conditions after removal of the stomach or any part of the intestine, alcohol abuse, drug addiction, treatment with glucocorticoids, hormone therapy or hormone replacement therapy, use of calcium or vitamin D (at a dose of more than 400 IU), drugs affecting obesity, and the presence of metal implants or a pacemaker in the body. Obesity was estimated by calculating the BMI (kg/m2), and whole body composition was measured using a total-body scanner (Lunar Prodigy Advance, USA). Bone densitometry was performed using dual-energy X-ray absorptiometry (DXA). The results were evaluated according to the WHO expert criteria. Women were distributed into two subgroups, normal body weight (control group; BMI = 20.0-29.9 kg/m2) and obese subjects (BMI ≥ 30.0 kg/m2). Postmenopausal women were at least five years past the onset of menopause. Some of the women were treated for hypertension (n=49), most of them postmenopausal (n=43). None of the women had been treated for osteopenia or osteoporosis in the past and they had no history of femur or vertebral fractures. The detection of multiple biomarkers (cytokines, growth factors, adhesion molecules and others) was performed with the multiplexed ELISA Quantibody Human array kit. The panel of capture antibodies was printed in multiple identical arrays on a standard slide. After a blocking step, samples were incubated with the arrays according to the manufacturer's instructions (RayBiotech, Inc., USA). Nonspecific proteins were washed off, and the arrays were incubated with a cocktail of biotinylated antibodies directed toward the 9 different parameters, followed by a streptavidin-labeled Cy3-equivalent dye. Signals were then visualized using a fluorescence laser scanner (Innoscan 900AL, Innopsys, France) at 532 nm and the data analysed using the Mapix software (Innopsys) and the Quantibody Q-Analyzer software (RayBiotech, Inc.). Briefly, the median local background was subtracted from the median fluorescence of each spot, and the corrected fluorescence was used to calculate the average fluorescence signal as well as the standard deviation. Statistical analysis Analysis of variables was carried out using the Mann-Whitney test with the SPSS statistical software package (SPSS Inc., Chicago, IL, USA). A value of p<0.05 was considered statistically significant. A multiple linear regression model was used for the evaluation of the potential confounders, such as the women's age, BMI, bone mineral density (BMD), waist size and tissue fat. Each variable was entered in sequence and assessed against the criteria for retention in the model. We also estimated the correlation between serological biomarkers and cell surface receptors in the women's samples. Pearson's correlation coefficients were considered significant at the 0.05 level. The statistical analysis of cell surface molecule expression and cell population determination in whole blood samples from fertile and postmenopausal women was reported by Horváthová et al. [20].
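To illustrate the spot quantification and the group comparisons described above, the sketch below gives a minimal Python version; it is not the Q-Analyzer or SPSS workflow actually used, and the array layout, variable names and numerical values are invented for the example.

```python
import numpy as np
from scipy.stats import mannwhitneyu, pearsonr

rng = np.random.default_rng(0)

def spot_signal(spot_pixels, background_pixels):
    """Background-corrected fluorescence of one array spot:
    median spot fluorescence minus median local background."""
    return np.median(spot_pixels) - np.median(background_pixels)

def replicate_summary(corrected_signals):
    """Average signal and standard deviation over replicate spots."""
    arr = np.asarray(corrected_signals, dtype=float)
    return arr.mean(), arr.std(ddof=1)

# Four replicate spots of one marker, each with its local background ring.
replicates = [spot_signal(rng.normal(520, 15, 60), rng.normal(310, 10, 120))
              for _ in range(4)]
mean_sig, sd_sig = replicate_summary(replicates)
print(f"corrected signal = {mean_sig:.1f} ± {sd_sig:.1f} (a.u.)")

# Invented example: compare a marker (e.g. the sICAM-1 signal) between the
# fertile (n = 96) and postmenopausal (n = 107) groups.
fertile        = rng.normal(200.0, 40.0, 96)
postmenopausal = rng.normal(230.0, 45.0, 107)
u_stat, p_val = mannwhitneyu(fertile, postmenopausal, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_val:.3g}")

# Invented example: correlation of a marker with a confounder (e.g. IL-6 vs. age).
age = rng.uniform(26, 79, 203)
il6 = -0.01 * age + rng.normal(0.0, 0.3, 203)
r, p = pearsonr(age, il6)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```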
We also analysed the levels of biomarkers in women divided into four groups according to fertility/postmenopausal status and BMI: fertile obese, fertile control, postmenopausal obese and postmenopausal control. Adiponectin was up-regulated in postmenopausal control women compared to fertile obese women (p<0.05). ICAM-1 was increased in postmenopausal obese women when compared to fertile obese women (p<0.05) (Table 2). The significant changes of biomarkers in women with different BMD are shown in Figures 2a and 2b. The soluble ICAM-1 (sICAM-1) level was higher in women with osteopenia (group 2, p<0.01). IL-6 was decreased in women affected by osteoporosis (group 3, p<0.05). In the total study group we looked for associations between soluble biomarker levels and the confounding variables BMI, BMD, tissue fat, waist size, and age. A significant negative correlation was found between IL-6 and age (r=-0.177, p<0.05). Finally, we tested the correlations between serological biomarkers, cell surface receptors and cell populations in the women's samples. We identified several statistically significant positive associations, namely between IGF-1 and B-lymphocytes, IL-6 and myeloid dendritic cells (DCs), IL-10 and plasmacytoid DCs, and others. Negative correlations were found between ICAM-1 and memory effector T cells, and between IL-17A and naive CD3+ T-lymphocytes (Table 3). The results of the flow cytometric study of cell surface markers and cell populations were reported in our previous publication [20].

Discussion

Recent studies have established that the onset of menopause is associated with a low-grade systemic inflammatory status, an inflammation manifested by increased serum levels of key proinflammatory cytokines (IL-1, IL-6 or TNF-α). The crosstalk between fat and bone involves effects of bone metabolism on energy homeostasis. The relationship between obesity and bone metabolism is complex and includes several factors [21]. The positive association between body weight and bone density has been established in numerous laboratory and clinical studies; a number of cytokines and hormones contribute to this positive association between adipose and bone tissue, acting either locally at sites where cells of the two tissues are adjacent to each other or systemically through the circulation [1,22]. The effect of obesity-induced metabolic abnormalities on BMD and osteoporosis is well established [23]. The positive influence of adipose tissue on bone tissue may be a consequence of the increased mechanical loading of bone, leading to increased bone anabolism. It may also be connected with changes, frequently occurring in postmenopausal women, in the formation of some osteotropic factors (e.g. estrogens, androgens, leptin). The system of receptor activator of nuclear factor-kB, its ligand and osteoprotegerin (RANKL/RANK/OPG) is the principal signaling pathway through which osteoblasts regulate the size of the activated osteoclast pool. This effect may be achieved through a direct effect on the expression of these molecules.

Table 3. The correlations between biomarkers and cell surface receptors and cell populations in the women's samples. Each cell surface receptor of interest was analyzed by multi-color immunophenotyping using monoclonal antibodies and a flow cytometer. Data were reported by Horváthová et al. [20]. IGF-1=insulin-like growth factor; MCSF=macrophage colony-stimulating factor; sICAM-1=soluble intercellular cell adhesion molecule; sE-selectin=soluble E-selectin; DCs=dendritic cells; IL=interleukin; RANK=receptor activator of nuclear factor-kB.
IL-6 has potent antiapoptotic properties on osteoblasts and may affect osteoclast development, both of which could lead to osteoporosis. Human studies show that IL-6 and sIL-6r levels are negatively associated with BMD, and the IL-6 gene is an independent predictor of BMD and peak bone mass [25]. Proinflammatory cytokines are frequently regulated in cascades, and the specificity of the cytokine response is provided by unique cytokine receptors. Al-Daghri et al. [26] investigated the relationship between osteoporosis and inflammation and reported that proinflammatory cytokines (IL-1b and IL-6) were significantly elevated in patients compared with controls. A significantly higher secretion level of IL-6 was also observed in osteoporotic bone marrow mesenchymal stem cells compared with normal controls [27]. It is known that IL-6 has pro- and anti-inflammatory properties and that only a few cell types express the IL-6 receptor and respond to IL-6 directly (classic signaling); all cells can be stimulated via the soluble IL-6 receptor (trans-signaling), since gp130 is ubiquitously expressed [28]. IL-6 is a pleiotropic cytokine that possesses activities that may enhance or suppress inflammatory bone destruction, and its anti-inflammatory properties predominate in inflammatory responses. Although the mechanisms of action still need to be defined, these may involve the direct suppression of IL-1 or the induction of endogenous antagonists or inhibitors of IL-1 such as IL-1ra and IL-10 [29]. Evidence from animal and in vitro studies suggests that increases in these cytokines promote bone resorption through several mechanisms, including increased osteoclast differentiation, activation and survival, enhanced RANKL expression, and inhibited osteoblast survival [30]. There is evidence that IL-6 levels tend to rise during the ageing process [25,31]. In contrast to other studies, we found that IL-6 was decreased in women affected by osteoporosis, and we observed a negative correlation between IL-6 and age. This may be due to the antihypertensive therapy in a large number of the postmenopausal and/or elderly osteoporotic women: several authors have demonstrated that antihypertensive treatment significantly decreases circulating levels of selected proinflammatory mediators, e.g. IL-6 [32,33]. It is also known that osteoporosis is most common among older women. Some authors have described that IL-6 may directly inhibit RANKL signaling in osteoclast precursor cells and decrease osteoclast formation [34][35][36]. IL-6 inhibitors prevent bone loss and cartilage degeneration; IL-6 may therefore be an important factor associated with osteoporosis and has been identified as a promising target for osteoporosis therapy [27,37,38]. Adiponectin has an important role in metabolism, primarily through reducing insulin resistance, and it has been shown to exert actions in the female reproductive system, including the hypothalamic-pituitary-ovarian axis and the endometrium. The expression of its receptors (AdipoR1, AdipoR2) has been reported in the brain, ovaries, endometrium and the placenta [39]. A study by Diwan et al. [40] showed that serum adiponectin was lower in obese than in non-obese participants and that adiponectin is inversely associated with BMI and waist circumference. Circulating adiponectin concentrations increase with age in normal-weight middle-aged and older women [41,42]. Serum adiponectin levels in non-obese women were significantly higher than in overweight or obese women [43].
In agreement with other studies, we have confirmed the combined effect of age and obesity: our results showed that adiponectin was increased in postmenopausal control women compared to fertile obese women. Matsui et al. [44] found that the serum adiponectin level in late postmenopausal women was significantly higher than that in early postmenopausal women. Literature reports indicating a link between plasma adiponectin levels and body fat, BMD, sex hormones, and peri- and postmenopausal changes draw attention to the possible use of adiponectin as an indicator of osteoporotic changes, suggesting that adiponectin may also modulate bone metabolism. Although several studies have shown that adiponectin has an adverse effect on bone mass, mainly by intensifying resorption, some authors have demonstrated that this peptide may increase the proliferation and differentiation of osteoblasts, inhibit the activity of osteoclasts, and reduce bone resorption [45]. The expression of cell adhesion molecules (ICAM-1, VCAM-1 and E-selectin) on endothelial cells greatly increases upon cytokine stimulation, and their main function is to bind ligands present on leukocytes to promote leukocyte attachment and transendothelial migration. These molecules can be cleaved and shed from the cell surface, releasing soluble forms. The biological significance of circulating cell adhesion molecules may be manifold, including the competitive inhibition of their binding receptors located on leukocytes, the reduction of endothelial binding sites, as well as signaling functions. The soluble forms of cell adhesion molecules are increasingly released from endothelial cells under inflammatory conditions and endothelial cell stimulation [46]. Obesity may induce endothelial activation or increased shedding of cell surface E-selectin, leading to a subsequent increase in soluble E-selectin levels. High serum concentrations of E-selectin closely correlated with increased total fat volume, but not with regional fat distribution [47]. In our study we noticed that ICAM-1 was increased in postmenopausal obese women when compared to fertile obese women. The ICAM-1 level was higher in women with osteopenia, but not with osteoporosis, in comparison to the control group. In bone metabolism, ICAM-1 exerts important osteotropic effects by mediating cell-cell adhesion of osteoblasts and osteoclast precursors, thereby facilitating osteoclast differentiation and bone resorption. Furthermore, it has been shown that osteoblasts adhere to opposing cells through adhesion molecules, resulting in the activation of intracellular signals and leading to the production of bone-resorbing cytokines such as TNFα, IL-1β and IL-6 [48]. Liu et al. [49] identified ICAM-1 as a regulator in the bone marrow niche. The two major processes of bone metabolism, bone formation and resorption, are regulated by cellular interactions in which osteoblasts and osteoclasts play a significant role. It has been reported [50] that osteoblasts adhere to opposing cells through particular adhesion molecules on their surface and that these adhesion molecules not only function as glue with the opposing partners but also transduce activation signals that facilitate the production of bone-resorbing cytokines. Both the cellular adhesion of osteoblasts and soluble factors are therefore significant for the regulation of bone metabolism.
Bone diseases such as osteoporosis, osteoarthritis and rheumatoid arthritis affect a great proportion of individuals, with debilitating consequences in terms of pain and progressive limitation of function. Existing treatments for these pathologies have been unable to alter the natural evolution of the disease and, as such, a clearer understanding of the pathophysiology is necessary in order to generate new treatment alternatives. One therapeutic strategy could involve the targeting of ICAM-1. In bone, ICAM-1 is expressed at the surface of osteoblasts and its counter-receptor, LFA-1 (lymphocyte function-associated antigen), at the surface of osteoclast precursors. ICAM-1 blockade between the osteoblast and the pre-osteoclast results in an inhibition of osteoclast recruitment and a modulation of inflammation, which could potentially help in controlling disease activity in bone pathologies [51].

IGF-1 and B-lymphocytes and naive T cells correlation

IGF-1 produced by bone marrow stromal cells in the hematopoietic microenvironment plays a key role in regulating primary B lymphopoiesis [52]. IGF-1 enhances diverse aspects of bone marrow function, including lymphocyte maturation [53]. In bone marrow, administration of IGF-1 promotes the production of mature B cells [54]. IGF-1 regulates diverse aspects of T-cell, B-cell and monocyte function through its interactions with the IGF-1 receptor (IGF-1R). Nearly all cells of the immune system, including T- and B-lymphocytes and NK cells, express IGF-1R and are therefore susceptible to the effects of IGF-1 [55]. IGF-1 levels have been shown to exhibit a positive relationship with thymopoiesis. IGF-1 plays important roles in hemopoietic cell growth and differentiation and in normal immune function. It promotes T cell proliferation during early activation and inhibits apoptosis of both immature and mature T cells. Most naive CD3+ and CD8+CD45RA+ T cells display IGF-1R, and IGF-1 may directly promote the survival or expansion of antigen-specific T cells through their interaction with IGF-1R [56]. We have described a significant association between IGF-1 and B-lymphocytes, naive CD3+ and CD8+ T cells. Our results are consistent with Chen et al. [57], who found a trend toward a positive correlation between IGF-1 levels and the percentage of naive CD8+ T cells, although it did not reach significance.

IL-6 and DCs correlation

Visceral adipose tissue immune homeostasis is regulated by the crosstalk between adipocytes and DC subsets [58]. IL-6 regulates DC differentiation in vivo [59]. After spontaneous differentiation in culture, DCs up-regulate cytokines such as IL-6 and IL-15 [60]. In contrast to Zhang et al. [61], we found a positive correlation between IL-6 and the level of myeloid DCs. IL-6 production by antigen-presenting cells (APCs) is involved in the priming of naive CD8+ T cells and the formation of memory CD8+ T cells [62]. It was therefore concluded that obesity is a positive modulator of IL-6R and IL-6 expression in adipose tissue, which might be a contributory mechanism inducing metabolic inflammation [63].

IL-10 and DCs and naive T cells correlation

DCs are key regulators of adaptive immunity with the potential to induce T cell activation/immunity or T cell suppression/tolerance [64]. IL-10 limits and terminates excessive T-cell responses to microbial pathogens to prevent chronic inflammation and tissue damage [65,66]. T cells require at least two signals from APCs to become fully activated. These signals (TCR/MHC, CD28/CD80/
CD86) are transmitted to the nucleus of T cells, resulting in the expression of activation markers at the cell surface, the induction of cytokine secretion or cytotoxic function, cell proliferation, and differentiation into effector cells. In adipose tissue, similar activation steps occur through interactions between T cells and adipose tissue-resident DCs. There is strong evidence that T cell activation is induced by adipose tissue components [67]. Maturing plasmacytoid DCs rapidly and strongly up-regulate the inducible costimulator ligand (ICOS-L) and specifically drive the generation of IL-10-producing T regulatory cells regardless of the activation pathway [68]. Plasmacytoid DCs thus have the potential to prime CD4+ T cells to differentiate into IL-10-producing T regulatory cells through preferential expression of ICOS-L. We have found a positive correlation of IL-10 with naive CD8+ T cells and plasmacytoid DCs. Smith et al. [69] identified a regulatory loop in which IL-10 directly restricts CD8+ T cell activation and function through modification of cell surface glycosylation, allowing the establishment of chronic infection. Cytotoxic CD8+ T cells may regulate their inflammatory effects during acute infections by producing IL-10 and thereby minimize immunopathological disease [70].

IL-17A and regulatory cells and naive T cells correlation

The effector T-cell lineage shows great plasticity. Th17 cells are acknowledged to be instrumental in the response against microbial infection, but are also associated with autoimmune inflammatory processes. Human regulatory T cells can differentiate into IL-17-producing cells when stimulated by allogeneic antigen-presenting cells [71]. We observed a positive relationship between IL-17A and natural regulatory T cells. Conventional T regulatory cells exert their suppressive effect via cell-cell contact-dependent and contact-independent mechanisms, and they produce anti-inflammatory cytokines (IL-10, TGF-β, IL-35) [72]. Regulatory T cells are effectively recruited to sites of inflammation; it is possible that they may have undesirable effects through their ability to differentiate into pathogenic Th17 cells in the presence of IL-6 and/or IL-23 [73]. IL-17A-producing cells may be "inflammatory" regulatory T cells (Foxp3+ Treg) in pathological microenvironments and may contribute to the pathogenesis of inflammatory disease by inducing inflammatory cytokines and inhibiting local T cell immunity, and in turn may mechanistically link human chronic inflammation to tumor development [74,75]. Efficient differentiation of natural regulatory T cells into Th17 cells occurs after in vitro stimulation of circulating naive CD4+ T cells in the presence of Th17-polarizing factors [76]. Consistent with this, our data point to reduced circulating naive T cells and increased production of IL-17A, most likely due to activation and differentiation of naive CD4+ T cells into Th17 cells.

MCSF and DCs and RANK correlation

DCs may have an important role in the bone resorption associated with various inflammatory diseases: they have the ability to produce cytokines (e.g. IL-1, IL-6, TNF-α), stimulate T cells to express RANKL, a major differentiation factor for osteoclast precursors, and could thereby play a role in osteoclastogenesis. Several in vitro studies support the notion that immature DCs are dependent on RANKL and MCSF and can differentiate into osteoclast-like cells. It is possible that additional cytokines or growth factors play a role during this process [77].
We detected an association between MCSF, DCs and RANK. MCSF is able to promote the development of DCs in vitro and in vivo [78]. MCSF signaling is indispensable for the commitment of monocyte differentiation into osteoclasts, allowing subsequent induction of osteoclastogenesis by RANKL. Modulation of the MCSF receptor by TNF-α converting enzyme plays a critical role in the regulation of the bifurcated differentiation of monocytes into DCs or osteoclasts [79]. Once activated by MCSF, osteoclast precursor cells express RANK, the receptor for the pro-osteoclastogenic cytokine RANKL, which is expressed on the surface of osteoblasts; binding of RANKL to RANK is required for the commitment and differentiation of the preosteoclast [80].

sICAM-1 and memory effector CD4+ T cells and DCs correlation

ICAM-1 is expressed predominantly by epithelial and endothelial cells, macrophages, monocytes, B- and T-lymphocytes, fibroblasts, DCs and other cell types. ICAM-1 might be responsible for endothelial adhesion or transmigration of DCs. Immature DCs continuously communicate with T cells in an antigen-independent manner, and this might be the mechanism used by DCs to prepare T cells for antigen encounter. These phenomena require both the ICAM-1/LFA-1 interaction and DC-released chemokines. Protective immune responses depend on the formation of immune synapses between T cells and APCs; ICAM-1 is abundantly expressed by mature DCs and is the primary ligand of the T cell integrin LFA-1 [81][82][83][84]. The ICAM-1/LFA-1 interaction may influence early events in the priming of naive T cells by facilitating T cell-APC conjugate formation and maturation of the immunological synapse, inducing T cell adhesion and movement, T cell activation, and proliferation. sICAM-1 is produced either by proteolytic cleavage of the extracellular membrane portion of ICAM-1 or directly by cells. The shedding of surface ICAM-1 is regarded as a protective mechanism against excessive leukocyte and monocyte attachment; it is part of a negative feedback loop, and sICAM-1 fails to co-stimulate T cell priming [85,86]. Our results confirm a positive correlation of sICAM-1 with DCs and a negative correlation with memory effector CD4+ T cells. Parameswaran et al. [87] reported that signals provided to T cells by ICAM-1 on APCs promote their differentiation into long-lived, proliferation-competent central memory T cells, whereas T cells primed on ICAM-1-null APCs differentiate preferentially into an effector memory T cell population. ICAM-1 expression is necessary for stable T cell-APC synapses that enhance CD69 expression, proliferation, IFN-γ secretion, and memory cell formation [88]. Memory CD4+ cells may provide protection to the host via an enhanced effector cytokine response that directs other immune cells. Thus, inhibition of proliferation may be a direct and inevitable consequence of the effector functions of the memory cells; during effective immune responses, CD4+ memory cells must somehow strike the right balance between expansion and effector function [89,90]. We support the opinion that elevated serum levels of ICAM-1 may be associated with lowered intercellular interactions, activation and proliferation of memory effector cells [87]. Manipulation of, and changes in, the interactions governed by ICAM-1 could influence priming and potentially bolster the development and maintenance of effector-memory populations.
This process may curtail the development of highly proliferative memory cells, and ICAM-1 may also enhance the antigen-driven functional exhaustion and deletion of T cells. ICAM-1 expression on T cells may permit the activated cells to cluster and receive paracrine IL-2 signals, which may push the terminal differentiation of T cells that cannot subsequently be stably maintained. Another possibility is that the lack of ICAM-1 interactions leads to inefficient separation of memory and effector cell properties as the activated cells divide [91,92].

E-selectin and DCs correlation

E-selectin is important for cell trafficking to sites of inflammation in humans, and it plays a critical role in the recruitment of immune effector cells to target inflammatory sites. Efficient extravasation of DCs to inflamed tissues is crucial in facilitating an effective immune response, but it also fuels the immunopathology of several inflammatory disorders. Our results show a positive correlation of E-selectin with DCs. E-selectin is constitutively expressed on the surfaces of endothelial cells and seems to be involved in the adhesion of blood DCs during inflammation. DCs, like other leukocytes, become activated and secrete many potent inflammatory mediators, including proinflammatory cytokines (e.g., TNF-α and IL-6). As professional APCs, DCs are the main orchestrators of the immune response: they patrol the body to capture antigens and migrate to the secondary lymphoid organs, while the internalized antigen is processed and presented to other immune cells [93][94][95].

Conclusions

We demonstrated that sICAM-1 and adiponectin differed between fertile obese and postmenopausal obese women, and between fertile obese and postmenopausal control women, respectively. The significant changes in IL-6 and ICAM-1 serum levels depended on BMD status. We also noticed correlations between soluble biomarkers and some blood cell populations.

Highlights

• Adiponectin enhancement in postmenopausal control women: a simultaneous effect of age and of obesity when compared to the fertile obese group.
• sICAM-1 increase in postmenopausal women, and in women with osteopenia.
Exhaled Aerosol Pattern Discloses Lung Structural Abnormality: A Sensitivity Study Using Computational Modeling and Fractal Analysis

Background

Exhaled aerosol patterns, also called aerosol fingerprints, provide clues to the health of the lung and can be used to detect disease-modified airway structures. The key is how to decode the exhaled aerosol fingerprints and retrieve the lung structural information for a non-invasive identification of respiratory diseases.

Objective and Methods

In this study, a CFD-fractal analysis method was developed to quantify exhaled aerosol fingerprints and applied to one benign and three malign conditions: a tracheal carina tumor, a bronchial tumor, and asthma. Respirations of tracer aerosols of 1 µm at a flow rate of 30 L/min were simulated, with exhaled distributions recorded at the mouth. Large eddy simulations and a Lagrangian tracking approach were used to simulate respiratory airflows and aerosol dynamics. Aerosol morphometric measures such as concentration disparity, spatial distributions, and fractal analysis were applied to distinguish various exhaled aerosol patterns.

Findings

Utilizing physiology-based modeling, we demonstrated substantial differences in exhaled aerosol distributions among normal and pathological airways, which were suggestive of the disease location and extent. With fractal analysis, we also demonstrated that exhaled aerosol patterns exhibited fractal behavior in both the entire image and selected regions of interest. Each exhaled aerosol fingerprint exhibited distinct pattern parameters such as spatial probability, fractal dimension, lacunarity, and multifractal spectrum. Furthermore, a correlation between the diseased location and the exhaled aerosol spatial distribution was established for asthma.

Conclusion

Aerosol-fingerprint-based breath tests disclose clues about the site and severity of lung diseases and appear to be sensitive enough to be a practical tool for diagnosis and prognosis of respiratory diseases with structural abnormalities.

Introduction

Accurate and early diagnosis of lung cancer is crucial to patients' survivability. For instance, patients with non-small cell lung cancer have a cure rate of more than 70% when diagnosed at Stage I, whereas the rate is less than 25% if diagnosed at Stage III [1]. Conventional methods of diagnosing lung diseases or cancers include pulmonary function tests and chest X-ray for screening, CT/PET/SPET for examining abnormal structures, and sputum cytology or lung tissue biopsy for evaluating the type and extent of the cancer [2]. These diagnostic procedures are generally reliable, but they are costly and require professional operation. Moreover, some procedures are invasive and pose radiation risks to patients. Recently, an alternative diagnostic method using a patient's exhaled breath has been developed, based on the premise that exhalation contains clues to many diseases [3]. Metabolic changes of growing cancer cells cause changes in the production of certain chemicals and generate a unique breath "fingerprint", which can be used to determine whether a disease is present. Studies have reported elevated levels of nitric oxide in relation to asthma [4], antioxidants in chronic obstructive pulmonary disease (COPD) [5], chemokines in cystic fibrosis [6], and isoprene in non-small cell lung cancer (NSCLC) [7]. Reviews of the evidence supporting lung cancer diagnosis using breath tests and of related developments of breath devices can be found in [8,9].
These breath devices are often small in size, noninvasive, easy to use, less expensive, and hold the promise of efficient diagnosis of lung cancer and other respiratory diseases. In spite of these advantages, gas-signature based breath devices only measure the presence and concentration of exhaled gas chemicals. They do not provide information on where these chemicals are produced (the cancer site) or the level of airway remodeling, both of which are crucial in cancer treatment planning. The site and degree of airway remodeling can be substantially different for different lung cancers (Fig. 1a). Any alternative that can locate the malignant sites in a safer and less expensive way would be highly desirable. Currently, this information can only be obtained with the help of radiological techniques such as CT or PET. A number of studies have explored the use of aerosols as a lung diagnostic tool, such as the aerosol bolus dispersion (ABD) method [10,11,12]. However, the ABD method does not provide new information about the lung function compared to existing pulmonary function tests [12]. More recently, Xi et al. [13] proposed a new aerosol breath test that has the potential to detect the disease and locate its site. This method arises from persistent observations of unique deposition patterns with respect to prescribed geometry and breathing conditions [14,15,16]. We hypothesize that each airway structure has a signature aerosol fingerprint (AFP), as opposed to the gas fingerprint discussed before. Accordingly, any deviation from the normal pattern may indicate an abnormality inside the airway, which can be retrieved with an inverse numerical approach developed by Xi et al. [17]. The subsequent questions are: how can we quantitate the exhaled AFP patterns from different airway geometries? Will the exhaled AFPs be sensitive enough to detect airway structural changes? More importantly, how can we use this information to predict the presence and location of airway abnormalities based on samples of exhaled aerosol profiles? In this study, fractal analysis will be implemented to quantitate the complex patterns of exhaled aerosol fingerprints. These patterns, although visually distinguishable, are resistant to automatic quantitation and comparison. Since its introduction by Mandelbrot [18], fractal analysis has been shown to be a robust and powerful tool to measure the subtle changes in biological morphology [19], vasculature [20], neural networks [21], metal structures [22,23], landscapes [24], and even the stock market [25]. Fractal geometry provides a simple model to describe complex systems with a minimum number of parameters (e.g., fractal dimension specifying the degree of irregularity or complexity). The conducting airways of human lungs are ''space filling'' fractal structures [26,27]. Studies of the bronchial tree have shown that the mean diameter of the airways is exponentially related to the order of branching [28] with a fractal dimension of 1.57 [29]. Considering that tracer particles sequentially fill and empty the fractal lung during inhalation and exhalation, it is conceivable that the exhaled aerosol profiles also exhibit fractal characteristics and are thus amenable to fractal analysis. However, fractal analysis has two inherent limitations. First, fractal dimension describes the complexity of an image by quantifying how much space is filled by the particles; however, it does not explain how the space is filled by these particles. 
To address this limitation, lacunarity is also evaluated to describe the spatial pattern of exhaled aerosol fingerprints. Lacunarity is a measure of heterogeneity that describes the distribution of empty spaces surrounding the particles. Patterns with high lacunarity are more heterogeneous while those with low lacunarity are more homogenous or rotationally invariant. Lacunarity adds significantly to the description of an image with a known fractal dimension, in that it describes the empty spaces in the image, and thus describes how the particles fill the space. Therefore, lacunarity can be used to differentiate aerosol patterns with similar fractal dimensions, which may fill the space differently. The second limitation of simple fractal analysis is that one single fractal dimension alone may not adequately describe the complex patterns of exhaled aerosol fingerprints, which consist of different scales or details [13]. Considering that airflows within the lungs result from a multiplicative cascade of non-linear processes [30], even small variations in the lung morphology could appreciably alter the exhaled aerosol patterns. Compared to monofractal dimension analysis, multifractal analysis provides more information about the space filling properties and thus will be more appropriate to quantify the exhaled aerosol profiles. Reviews of monofractals and multifractals can be found in [31]. The objective of this study is to assess the feasibility of aerosol breath tests in diagnosing the location and severity of obstructive lung diseases. We will first evaluate the sensitivity of exhaled AFPs to airway modifications by computationally testing four lung models (one benign and three malign conditions). To simulate a breath test, aerosols are first inhaled and subsequently exhaled, with exit aerosol profiles being captured at the mouth. The exhaled AFPs will then be quantified using fractal and lacunarity analysis to yield a more compact and simplified representation of the particle spatial distributions. This will help to better correlate the aerosol patterns and airway diseases. An automated pipeline of pattern characterization and classification can also expedite processing of large amounts of images in the future.

Construction of airway models with normal and malign conditions

To evaluate the sensitivity of exhaled aerosol profiles to morphology variations of the upper respiratory airway, four models were considered in this study. The first one, Model A, extended from the mouth to the bronchial bifurcations G6 and was originally developed by Xi and Longest [14] based on MRI images of a healthy adult male. Details of the airway geometry, including the construction procedures and critical dimensions, could be found in Xi and Longest [14,32] and were described briefly as follows. The multi-slice MRI scans of the subject were segmented using MIMICS (Materialise, Ann Arbor, MI) into a 3-D model, which was further converted into a set of contours that defined the airways of interest. Based on these contours, an internal surface geometry was constructed in Gambit. Surface smoothing was performed to the least extent to preserve the airway anatomical details as much as possible. The resulting model was intended to represent a normal airway and was further modified to generate the other three models with different abnormalities in the tracheobronchial (TB) region, as shown in Fig. 1b. Model B had a 10 mm tumor located at the tracheal carina ridge.
Model C had a smaller tumor (4 mm) at the segmental bronchi in the left lower lobe. The tumor-to-airway diameter ratios selected here were consistent with those adopted by Segal et al. [33], who studied the impact of tumor size and location in TB airways. Model D had two severely constricted segmental bronchi in the left upper lobe (numbers 3 and 4 in Fig. 1b) and represented asthmatic airways. Morphologically, Model B represents a large airway obstruction, Model C a small airway obstruction, and Model D a severe flow perturbation. Detailed information on the location and size of the abnormalities in the four models is listed in Table 1.

Numerical breath test protocol

There were two steps in this protocol: image acquisition via computational modeling and image analysis via fractal and lacunarity analysis. For the computational modeling used to acquire the exhaled aerosol fingerprints (AFPs), both inhalation and exhalation were simulated, with a bolus of tracer particles first inhaled slowly and then exhaled. It is assumed that inhaled ambient air or mainstream smoke enters the mouth-throat (MT) geometry with a relatively blunt velocity profile, defined in terms of the inlet radial coordinate r, the mean velocity u_m, and the inlet radius R. This profile is similar to a constant-velocity inlet but provides a smooth transition to the no-slip wall condition. A stochastic model was used to generate the inlet particle profiles, with initial particle velocities matching the local fluid velocities. Five particle inlet profiles were simulated for each model. During inhalation, atmospheric and vacuum pressures were assumed at the mouth and bronchial outlets, respectively. Aerosols were released at the mouth and recorded at the outlets. During exhalation, the recorded bronchial particle profiles were specified as the inlet conditions and were tracked with the expiratory airflow. The exhaled aerosols were collected at the mouth. The exhaled particle profiles (or AFPs) were then visualized and analyzed in order to classify the AFP patterns among airway models with normal and malign conditions. The computationally predicted results are shown in the form of particle locations, particle concentration distributions, and concentrations relative to the normal condition. The resultant images were then quantitated using (1) statistical distributions in the translational, radial, and circumferential directions to describe the spatial pattern of the AFPs, (2) regional and localized fractal analysis, (3) lacunarity analysis, and (4) regional and localized multifractal analysis. The involved algorithms are explained below.

Computational fluid-particle transport models

Flows in this study were assumed to be isothermal and incompressible. Continuous inhalation and normal breathing conditions were assumed for all simulations. A large eddy simulation (LES) approach with the WALE sub-grid model was used to solve the flow field, which included a resolved part and a sub-grid part. The resolved part of the field represented the 'large' eddies and was solved directly, while the sub-grid part of the velocity represented the 'small scales', whose effect on the resolved field was included through the sub-grid-scale (SGS) model. The LES-WALE model has been shown to produce almost no eddy viscosity in wall-bounded laminar flows and is therefore capable of reproducing the laminar-to-turbulent transition [34]. A more detailed mathematical description of the LES-WALE model is given in Nicoud and Ducros [34].
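For orientation, one commonly used blunt inlet profile with the properties described above is the power-law form sketched below; the exponent and prefactor are assumptions for illustration only and not necessarily the exact expression used in the simulations:

$$
u(r) \;=\; u_m\,\frac{n+2}{n}\left[1-\left(\frac{r}{R}\right)^{n}\right],\qquad n \gg 2,
$$

where a large exponent n (e.g., n = 10) keeps the core of the profile nearly uniform while still enforcing the no-slip condition u(R) = 0, and the prefactor (n+2)/n ensures that the area-averaged velocity over the circular inlet equals u_m.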
The trajectories of monodispersed particles with diameter d_p were calculated on a Lagrangian basis by directly integrating an appropriate form of the particle transport equation [14], in which v_i is the particle velocity, u_i is the local fluid velocity, and τ_p = ρ_p d_p²/(18μ) is the characteristic time required for a particle to respond to changes in the flow field, with ρ_p being the particle density, μ the air viscosity, and d_p the particle diameter. The drag factor f is based on the expression of Morsi and Alexander [35]. The Cunningham correction factor C_c was computed using the expression of Allen and Raabe [36]. The effect of Brownian motion was considered [37] due to the small particle size in this study. In-house user-defined functions (UDFs) were implemented that accounted for the near-wall damping effect [32] and the finite particle inertial effect [38]. In our previous studies, the UDF-enhanced Lagrangian model has been shown to provide close agreement with experimental deposition data in upper respiratory airways for both submicrometer [39] and micrometer particles [16,40,41].

The computational meshes of the four airway models were generated with ANSYS ICEM CFD (Ansys, Inc.). Due to the high complexity of the model geometries, unstructured tetrahedral meshes were generated with high-resolution prismatic cells in the near-wall region. A grid sensitivity analysis was conducted by testing mesh densities of approximately 600 k, 1.2 million, 2.0 million and 3.2 million control volumes while keeping the near-wall cell height constant at 0.05 mm. Since the changes in both total and sub-regional depositions were less than 1% when increasing the mesh size from 2 million to 3.2 million, the final grid used for reporting flow field and deposition results consisted of approximately 2 million cells with a thin five-layer pentahedral grid in the near-wall region and a first near-wall cell height of 0.05 mm.

Fractal and lacunarity analysis

Box-counting fractal dimension (D_B): D_B is a measure of increasing detail with decreasing resolution scale. It was calculated as the slope of the regression line of the log-log plot of box size (or scale, ε) versus the box count N_ε, which is the number of grid boxes containing pixels.

Lacunarity (L): As a measure of heterogeneity, lacunarity was calculated from the ratio of the standard deviation σ to the mean number of pixels per box μ at box size ε, averaged over the E box sizes considered [19,42]. The sliding-box algorithm was implemented to calculate the lacunarity L [19]. The resulting lacunarity is independent of D_B, and patterns indistinguishable by their D_B are often distinguishable by L, or vice versa [19].

Multifractal spectrum f(α), α: Multifractal analysis relies on the fact that natural systems often possess rich scaling properties. To calculate the multifractal dimensions, a normalized measure m_i(q, ε) was constructed with a family of scaling exponents q to explore different regions of the singularity measure. For q > 1, m_i(q, ε) amplifies the more singular regions of the measure, while for q < 1 it accentuates the less singular regions. The singularity strength α(q) and the multifractal spectrum function f(α) were then computed with respect to m_i(q, ε); the plot of f(α) versus α constitutes the multifractal spectrum. To calculate the lacunarity L and the multifractal parameters α and f(α), ImageJ with the FracLac plugin was used [43].
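For reference, standard textbook forms of the quantities defined verbally above can be sketched as follows; they are given for orientation only and are assumptions consistent with the stated definitions rather than reproductions of the exact expressions used in Refs. [14,19,42] or in the FracLac implementation [43]. A drag-dominated Lagrangian equation of motion with gravity g_i and a Brownian force f_i^B reads

$$
\frac{dv_i}{dt} \;=\; \frac{f}{\tau_p}\,(u_i - v_i) \;+\; g_i \;+\; f_i^{B},
\qquad \tau_p=\frac{\rho_p d_p^{2}}{18\,\mu}.
$$

A commonly used mean lacunarity over the E box sizes is

$$
L \;=\; \frac{1}{E}\sum_{\varepsilon}\left(\frac{\sigma_\varepsilon}{\mu_\varepsilon}\right)^{2},
$$

and the direct (Chhabra-Jensen) construction of the multifractal spectrum from the box probabilities p_i(ε) is

$$
m_i(q,\varepsilon)=\frac{p_i(\varepsilon)^{q}}{\sum_j p_j(\varepsilon)^{q}},\qquad
\alpha(q)=\lim_{\varepsilon\to 0}\frac{\sum_i m_i(q,\varepsilon)\,\ln p_i(\varepsilon)}{\ln\varepsilon},\qquad
f(q)=\lim_{\varepsilon\to 0}\frac{\sum_i m_i(q,\varepsilon)\,\ln m_i(q,\varepsilon)}{\ln\varepsilon}.
$$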
Generalized fractal dimension (D_q): The generalized dimension D_q addresses how mass varies with ε and provides a direct measurement of the fractal properties of the image; it is defined as

$$
D_q=\frac{1}{q-1}\,\lim_{\varepsilon\to 0}\frac{\ln\sum_i p_i(\varepsilon)^{q}}{\ln\varepsilon},
$$

where p_i(ε) is the fraction of the total pixel mass contained in the i-th box of size ε. The plot of D_q versus q tends to be a decreasing sigmoid for multifractals and horizontal for non- or monofractals.

Statistical analysis

Exhaled aerosol data are presented as mean ± standard deviation (SD) based on the five breath tests for each model. Data analysis was performed using the SAS statistical package (SAS Institute, Inc.). A Kruskal-Wallis one-way analysis of variance test was used to compare the differences in the exhaled aerosol patterns of the different models in terms of their fractal dimensions and lacunarities. A difference was considered statistically significant if p < 0.05.

Figure 2 shows the comparison of expiratory airflows among the four models. The presence of an airway obstruction noticeably alters the airflow field near the diseased site, as shown by the distorted streamlines and velocity distributions (top panel in Fig. 2). The variation of the velocity field is further visualized using the cross-sectional particle distributions (middle panel) close to the carina (Slice A-A'). The tracer particles have a diameter of 1 µm and closely follow the airflow. It is observed that both the location and the size of the airway obstruction influence the exhaled flows, which gives rise to different expiratory particle patterns. The lower panel of Fig. 2 shows the velocity distributions at Slice A-A' of the four models in both the horizontal (Z) and transverse (Y) directions. Compared to the control case (Model A), the most dramatic difference is noted in Model B (carina tumor), which has the largest tumor size and is closest to the sampling plane A-A'. In contrast, Model C (segmental bronchial tumor) gives very similar velocity profiles due to its smaller tumor size and larger distance from Slice A-A'. However, this similar airflow does not necessarily imply similar particle profiles, which depend on both the local airflow and the upstream particle histories [44]. The time-integrative nature of the particle behavior can be seen clearly by comparing Models A and C in terms of their similar velocity profiles and different particle distributions at Slice A-A' (Fig. 2). Lower velocities are observed in Model D (Fig. 2) due to the severely constricted segmental bronchi and the associated higher flow resistance. There is a spot that is devoid of particles in the top right corner of Model D, which is presumably caused by the two constricted bronchi. The difference in airflows gradually diminishes as they move towards the mouth; however, the particle profiles remain different due to their time-integrative properties.

2-D comparison of exhaled aerosol fingerprints (AFPs)

The exhaled particles collect into a pattern that is unique to the lung structure and can be considered the "fingerprint" of that lung. The first row of Fig. 3 displays particle distributions collected at the mouth for an aerosol size of 1 µm and a flow rate of 30 L/min. Overall, each of the four models exhibits a pair of vortexes and an asymmetrical aerosol distribution, the latter of which may stem from the asymmetry of the right and left lungs. However, discrepancies in the aerosol distributions are still apparent among the four models. Compared to Model A, Models B (tracheal carina tumor) and C (left segmental bronchial tumor) both exhibit very different patterns.
First, the two vortexes and the central stripe in Models B and C are much less defined. The left vortex almost vanishes in Model B. Secondly, for Models B and C with obstructive tumors, an increased portion of the aerosols is trapped in the airway due to elevated inertial impaction. However, even though the particle patterns of Models B and C look similar, careful examination still reveals discernible differences. The presence of a carina tumor (Model B) disturbs the aerosol distribution in both the lower-left and lower-right regions, while the influence of the left segmental bronchial tumor is mainly limited to the lower-left region (top panel in Fig. 3). For Model D with two severely constricted bronchi, the exhaled aerosol profile resembles that of Model A, except for one crescent-shaped region at the upper-left corner that is devoid of particles. This observation clearly corroborates the hypothesis that the exhaled aerosol distribution is the fingerprint of the lung structure, which can be used to probe structural remodeling by lung tumors and other respiratory diseases. Even though the particle distributions look different among the four models, they may not accurately represent the concentration distribution due to particle overlapping. The second row of Fig. 3 therefore shows the relative particle concentrations (i.e., the ratio of the local particle concentration to the overall concentration), with red representing high concentrations. For a given model, the particle (first row) and concentration (second row) distributions resemble each other in terms of the overall pattern. However, the concentration image is able to identify the peak particle accumulations (red color), which the particle distribution image is incapable of identifying. To highlight the variation of the AFPs with different model geometries, the concentrations relative to the normal condition (baseline) are plotted in the third row of Fig. 3. As such, image A-A (not shown) should have zero concentration everywhere. The other three images (B-A, C-A, D-A) exhibit both positive and negative values, with the red color representing the peak concentration of the abnormal case and the blue color representing the peak concentration of the baseline case. Therefore, if two adjacent spots have a similar pattern but opposite colors (red vs blue), the shift between these two spots can be used to distinguish that disease. Considering the red and blue spots at the top of the D-A image, the constricted bronchi in Model D caused the blue spot of the control case to shift toward the top-left (Fig. 3).

Spatial distribution of exhaled aerosol particles

Even though it is effective to differentiate exhaled AFP patterns visually, this process can be slow if there are a large number of images. In order to develop an automated pipeline to quantify exhaled AFP profiles, we explore multiple analytical approaches to distinguish the complexity of different exhaled aerosol profiles. The automated methods that have been tested include spatial scanning, fractal dimension, lacunarity analysis, and multifractal spectra. Figure 4 shows the statistical distributions of the exhaled particles in different directions. Taking Fig. 4a as an example, each point represents the probability that the exhaled particles are found at a specified horizontal distance x/X. In this example, the AFP image has been evenly divided into 50 bins along the horizontal direction.
The number of particles in each bin is counted and normalized by the total number of exhaled particles and the area of the bin, yielding the probability of particle distribution (%/mm²) at x/X. This is equivalent to scanning the AFP image in the x direction with a scan resolution of D/50, with D being the diameter of the image. In order to quantify the spatial characteristics of the exhaled particle patterns, the images are scanned in four directions: horizontal, vertical, radial, and circumferential (rose plot). Generally, each airway model considered in this study exhibits a unique profile of spatial distribution probabilities and is therefore applicable to supplement the classification of airway anomalies. Considering Figs. 4a and 4b, two spikes are observed for Model D (asthma) at x/X ≈ 0.2 (Fig. 4a) and z/Z ≈ 0.65 (Fig. 4b), which collectively point to the hot spot located at the normalized Cartesian coordinate (0.2, 0.65) shown in Fig. 3 (concentration distribution D). The same hot spot also manifests itself as a spike in Fig. 4c at r/R ≈ 0.7 and in Fig. 4d at θ ≈ 70°. This indicates that the directional particle distribution is a sensitive index of the spatial pattern, which could possibly be quantified with two mutually orthogonal directions.

Fractal, lacunarity, and multifractal analysis

3.4.1 Fractal dimension analysis. Monofractal analysis of the exhaled aerosols using the box-counting method is shown in Fig. 5 for the four models. We consider the fractal dimensions from two perspectives: in the entire sample image and in the selected region of interest (ROI), as illustrated in Fig. 5a. The correlation factor of the linear regression of the data is 0.978 for the entire region, indicating that the particle distribution exhibits a statistically fractal feature (Fig. 5a). The local distribution also exhibits a fractal feature (R² = 0.964), except that it has a smaller fractal dimension (FD_ROI = 1.274) and is less complex than that of the entire region (FD_Entire = 1.4423). A comparison of the FD based on the entire region among the four models is shown in Fig. 5b. The FD standard deviation for each model has been calculated from five test cases with stochastically generated inlet particle profiles (n = 5). Significance is indicated by * (p<0.05) and ** (p<0.01). The deviation of the FD from the normal case is consistent with the level of airway remodeling, even though it is small in magnitude. Model C (small bronchial tumor) and Model B (large tumor) cause insignificant variations in FD, while Model D (asthma) causes a larger FD variation (p<0.05). The lower FD (1.4108) of Model D versus the control case (1.4423) corroborates the prior report that asthmatic lungs have decreased FD values compared to non-asthma controls [45]. In part, this decrease might be explained by the ventilation loss due to airway constrictions. In contrast to the small variations of the entire-region-based FDs, the variation of the local FDs is more pronounced. For the region of interest (ROI) (red square in Fig. 5a), the local FD of Model B is significantly lower than the control (p<0.01). In view of the proximity of the FD values for Models A, C, and D, it is possible that this selected ROI is largely affected by the carina tumor and not by the bronchial tumor (C) or the asthmatic bronchi (D). The local FD distribution on a 6×6 grid is displayed in Fig. 5d. For each grid cell, the color code is based on the FD ratio b(i) = FD(i)/FD(A), i = B, C and D. Again, the color patterns of the four models are different from each other and are unique to each airway abnormality.
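As a concrete illustration of the box-counting procedure used for D_B (Sec. 3.4.1), the following is a minimal numpy sketch applied to a synthetic binary point pattern; it is not the FracLac implementation used in the study, and the toy image and box sizes are chosen only for demonstration.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes):
    """Estimate the box-counting fractal dimension of a 2-D binary image.

    For each box size, the image is tiled with non-overlapping boxes and the
    number of boxes containing at least one nonzero pixel is counted; D_B is
    the slope of log(N_eps) versus log(1/eps).
    """
    counts = []
    for eps in box_sizes:
        n_boxes = 0
        for i in range(0, binary_image.shape[0], eps):
            for j in range(0, binary_image.shape[1], eps):
                if binary_image[i:i + eps, j:j + eps].any():
                    n_boxes += 1
        counts.append(n_boxes)
    # Linear regression of log(counts) against log(1/eps); the slope is D_B.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Toy example: a random point pattern standing in for an exhaled aerosol image.
rng = np.random.default_rng(0)
image = np.zeros((256, 256), dtype=bool)
pts = rng.integers(0, 256, size=(4000, 2))
image[pts[:, 0], pts[:, 1]] = True

print("Estimated D_B:", box_counting_dimension(image, box_sizes=[2, 4, 8, 16, 32]))
```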
3.4.2 Lacunarity analysis. In general, measures of lacunarity correspond to visual impressions of uniformity, where low lacunarity implies homogeneity and high lacunarity implies heterogeneity. From Fig. 6a, the differences in lacunarity among the four models are more pronounced than those of the fractal dimensions for both the entire sample and the selected region of interest (ROI). Specifically, the lacunarity of Model B differs significantly (p<0.01) from the control case (Model A) even though their fractal dimensions are similar. As discussed before, fractal dimension and lacunarity are statistical indexes of complexity and heterogeneity, respectively, and do not necessarily correlate with each other. Knowing the lacunarity helps to separate exhaled aerosol images with close fractal dimensions. Comparing the ROI-based images in Fig. 6b, Model B (carina tumor) has the largest lacunarity (Fig. 6b) and the smallest fractal dimension (Fig. 5b) among the four models, while the variations among the other three models are insignificant. This suggests a strong correlation between the carina tumor and the large variations of the fractal dimension and lacunarity in the ROI, which will be further analyzed using multifractal spectrum analysis in the following section.

3.4.3 Multifractal analysis. Multifractal patterns are intrinsically more complex than monofractals. The visual patchiness of the exhaled aerosol profiles suggests that different scaling properties may exist. Figure 7 shows the spectra of the generalized dimension D_q versus the scaling exponent q for both the entire region and the selected ROI. In all cases, D_q is a monotonically decreasing function of q, indicating that the exhaled aerosol patterns exhibit multifractal features [46] and are more appropriately described by the multifractal spectra than by the box-counting fractal dimensions alone. In the case of a monofractal, the D_q spectrum would be a constant line, which is not observed in Fig. 7. Moreover, the D_q spectra of the ROI cases are flatter than those of the entire region, suggesting that the local patterns are more nearly monofractal (even though they are not), while the overall patterns are more strongly multifractal. The multifractal spectra for the gray-scale images of the exhaled aerosol concentration profiles are shown in Fig. 8. The aerosol concentration images are first shown as 3-D plots (Figs. 8a-d), which exhibit very different patterns among the four models. Considering the entire-region analysis (Fig. 8e), a small geometric deviation such as the bronchial tumor (Model C) leads to a profile similar to that of the control case, while large geometric variations lead to spectra that are much different from the control, which is consistent with Fig. 5. For the selected ROI, the spectra are more symmetrical than those of the entire region. The ROI-based spectra also have a smaller range of f(α) as well as a narrower range of α compared to those of the entire image, suggesting lower multifractality of the ROI images. In particular, the ROI-based spectrum for Model B has the smallest ranges of f(α) and α (Fig. 8f); Model B also has the smallest monofractal dimension (Fig. 5c) and the largest lacunarity (Fig. 6b). This is in line with the results of previous studies [20,47] showing that a pattern with a more asymmetric spectrum and a narrower range of α generally has higher density and lower lacunarity. Examples of such patterns include soils with massive structures and low porosity [47] and vascular beds with high complexity and lower emptiness [20].
In this study, the exhaled aerosol profiles of the entire region are more complex and heterogeneous than those of the ROI. Apparent differences in the ROI spectra are also observed among the four models (Fig. 8f), lending further evidence that multifractal analysis might be adequate for identifying geometry-associated aerosol variations.

AFPs for asthma with varying severity

To test whether the exhaled AFPs are sensitive enough to distinguish the pathologic states of respiratory diseases, four levels of airway constriction (D0, D1, D2, D3) caused by asthma have been considered, as illustrated in Figs. 9a and 9c. Exhaled particle distributions are shown in Fig. 9b. It is noted that the crescent-shaped void at the upper-left corner becomes more obvious with increasing severity. To further test the sensitivity of the aerosol voids to the disease severity, particles were released only from the ROI and their exhaled locations plotted in red (Fig. 9A). For the zero-level constriction (D0), red particles are observed enclosing the region that is otherwise aerosol-void in the asthmatic scenarios (D1-3). With increasing severity, the red particle contours shrink progressively in space (Fig. 9B), with drastically elevated concentrations in certain regions (solid arrow) and decreased concentrations in other regions (hollow arrow) (Fig. 9D), reflecting the asthma condition at the ROI. As a result, these tagged particles could be used not only to evaluate the severity of the airway constriction, but also to discover the location of the disease. Fractal analyses of the exhaled AFPs for asthma of varying severities are shown in Fig. 9E. Again, the standard deviation for each case has been calculated from five tests with different stochastically generated inlet particle profiles. For both the entire region and the selected ROI (upper left), there is a progressive decline in FD for airways with increasing severity. Concerning the ROI, the FD and lacunarity of each asthma case (D1-3) are different from those of the control D0. In light of the multifractal spectra of the ROI, increasing airway constriction leads to a continuous narrowing of both α and f(α), which clearly distinguishes the four asthma states considered in this study.

Discussion

The use of multiple analytical techniques is becoming increasingly pertinent when exploring complex biological systems. In this study, we demonstrated the feasibility of a coupled CFD-fractal approach to quantitatively distinguish the exhaled AFP patterns of healthy and diseased lung models. Physiology-based numerical modeling was employed to predict the exhaled aerosol patterns (fingerprints), which revealed notable variations of the exhaled aerosol fingerprints among the four models in both visual patterns and fractal measures. Compared to our previous study [13], which was limited to visual patterns and a qualitative treatment only, the current study quantified the exhaled AFP patterns by exploring multiple analytical approaches, including concentration disparity, spatial scanning, monofractal, lacunarity, and multifractal analysis. These approaches collectively generate a feature vector of the AFP pattern, which could be further used for automated classification of AFPs and diseases. The concentration disparity provides a more informative comparison than the particle distributions presented in the previous study [13]. The spatial scanning can quantify particle distributions in either the (x, y) or the (r, θ) directions and is a sensitive index of the spatial pattern.
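To complement the box-counting sketch given earlier, the gliding-box lacunarity used in Sec. 3.4.2 can be illustrated with the minimal numpy sketch below; it follows the common (σ/μ)² convention averaged over box sizes, which is an assumption and may differ in detail from the FracLac settings used in the study.

```python
import numpy as np

def sliding_box_lacunarity(binary_image, box_sizes):
    """Mean lacunarity of a 2-D binary image using the gliding-box algorithm.

    For each box size r, a box is slid over every admissible position, the pixel
    mass inside the box is recorded, and the lacunarity at that size is taken as
    (std/mean)^2 of the mass distribution; the values are then averaged over sizes.
    """
    img = binary_image.astype(float)
    # 2-D cumulative sum (integral image), padded so box sums become differences.
    c = np.pad(np.cumsum(np.cumsum(img, axis=0), axis=1), ((1, 0), (1, 0)))
    lac_per_size = []
    for r in box_sizes:
        masses = (c[r:, r:] - c[:-r, r:] - c[r:, :-r] + c[:-r, :-r]).ravel()
        mean = masses.mean()
        if mean > 0:
            lac_per_size.append((masses.std() / mean) ** 2)
    return float(np.mean(lac_per_size))

# Toy example: a clumped pattern should show higher lacunarity than a uniform one.
rng = np.random.default_rng(1)
uniform = rng.random((128, 128)) < 0.05
clumped = np.zeros((128, 128), dtype=bool)
for cy, cx in rng.integers(10, 118, size=(8, 2)):
    clumped[cy - 6:cy + 6, cx - 6:cx + 6] = True

sizes = [4, 8, 16, 32]
print("Lacunarity (uniform):", sliding_box_lacunarity(uniform, sizes))
print("Lacunarity (clumped):", sliding_box_lacunarity(clumped, sizes))
```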
In light of the fractal analysis, the ROI-based fractal dimension and lacunarity can be significantly correlated with the severity of airway obstruction. In addition, multifractality can identify the subtle differences in the exhaled aerosol profiles. The capacity to differentiate not only gross differences but also subtleties among aerosol fingerprints is highly desirable. It provides a useful tool in decoding the complexity of such fingerprints and thus can be used to monitor the pathogenesis of an airway disease or track the therapeutic outcomes of an intervention protocol. Besides, one particular advantage of the physiology-based modeling is that the results will not be confounded by any other factors except the factor of interest. The fractal dimensions (FD) of exhaled AFPs are observed to decrease with increasing disease severity. A decrease in FD indicates a loss in complexity, which reflects a decrease in the space-filling ability of diseased airways. Airway remodeling is a consequence of chronic injury and repair, whose site and severity can vary significantly. The structural variations considered in this study (Table 1) are small and represent a conservative evaluation of the performance of the proposed AFP-based breath test. For example, airway constrictions in some asthma patients are much more severe than Model D in this study. Fatal asthma can have 44% closure of the whole airway [45]. In this sense, a more pronounced variation of fractal measures is expected in clinical practice, which should be even more useful for diagnostic purposes. The AFP-based breath test is envisioned to be similar to a personal air sampling system. The patient inhales tracer particles and then exhales. The exhaled particles are collected on a fibrous or porous membrane filter [48] for a prescribed sampling period. To minimize artifacts due to variations in breathing or body posture, the breath rate and body posture should be standardized during the test. Exhaled profiles or fingerprint patterns can be quantified with different approaches to distinguish normal versus diseased lungs. This can be achieved via direct image processing, microscope-based counting, fluorescent intensity measurement, or chemical quantification [49]. Particle counting with a microscope has been used to determine total and local depositions of aerosols in a bifurcating geometry [50]. The concentration of fluorescent tracers can be measured with a fluorometer [51]. Another alternative is to use chemically sensitive tracer particles that change color upon contacting the filter and generate a pattern that is specific to the respiratory structure of concern [52]. Ideally, tracer aerosols for the breath test should be non-invasive, sensitive, easy to analyze, disease-specific, and repeatable. Results of this pilot study suggest that the first three criteria are attainable. The eventual breath test will consist of two steps to detect and localize the disease: (1) extraction of image features and (2) classification between images and diseases. The methodology presented in this study has focused on feature extraction only, which will accurately and compactly quantify the image. However, by itself it is not enough to identify or trace back to the disease site. This requires a database of image-disease pairs and a classification method that correlates the images with their respective diseases. The extracted feature vector will be used as the input to train the classification function f(x) that correlates the images and diseases.
Among the many options of classification methods, such as neural networks, machine vision, and support vector machines (SVM), the SVM algorithm will be selected for future classification studies due to its accuracy and ease of use [53]. To this end, the open-source software package LIBSVM (Library for Support Vector Machines) could be adopted for data classification [54]. As a proof-of-concept study, we employed ideal breathing conditions to assess the feasibility of the proposed breath test, e.g., the same flow velocities for both the normal and pathological lung models. It is noted that a patient with respiratory distress may breathe differently, which can alter the exhaled AFP patterns. To accurately detect an airway abnormality, it is necessary that the pattern of the exhaled AFP persists over a certain range of breathing conditions even though its pattern details may vary. Practically, the respiration bias can be minimized by instructing the patient to inhale steadily and by activating the exhalation sampling only when the patient breathes within the acceptable range. Future studies should also be conducted to determine the sensitivity of the AFP-based breath test under various breathing conditions. Quantifying the respiration effect will help to determine the detection sensitivity, the tolerance of breathing deviations, and the optimal breathing maneuvers for the breath test [55]. Other limitations of this study include the assumptions of steady flow, no humidity, no charge effects, rigid airway walls, and a small sample size. Previous studies have highlighted the significance of transient breathing [56], hygroscopic growth [57,58], particle charge effects [59,60], dynamic glottis [61], and intersubject variability [62,63]. Generally, structural variations are also accompanied by tissue property and functional changes, which are expected to result in a larger magnitude of fractal changes. Concerning the sample size, the geometry models considered are from a limited number of subjects and do not account for intersubject variability. Each of these factors affects the realism of the model predictions in relation to the actual performance of the aerosol breath test. These limitations should be addressed in order to develop more physically realistic models. Future numerical studies with more realistic models and a larger sample size, as well as complementary in vitro tests, are necessary to advance our knowledge of the feasibility and efficiency of this new lung diagnosis protocol.
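To illustrate the classification step proposed at the start of this section, here is a hedged sketch of training an SVM on AFP feature vectors (for example, fractal dimension, lacunarity, and multifractal spectrum width). scikit-learn's SVC, which wraps the LIBSVM library cited above, is used; the feature values and class labels are synthetic placeholders rather than data from this study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 3))              # mock feature vectors: [FD, lacunarity, alpha-range]
y = rng.integers(0, 4, size=80)           # mock labels standing in for the four lung models

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))   # scale features, then RBF-kernel SVM
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))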
Heat stress exposure causes alterations in intestinal microbiota, transcriptome, and metabolome of broilers

Introduction
Heat stress can affect the production of poultry through complex interactions between genes, metabolites and microorganisms. At present, it is unclear how heat stress affects genetic, metabolic and microbial changes in poultry, as well as the complex interactions between them.

Methods
Thus, at 28 days of age a total of 200 Arbor Acres broilers with similar body weights were randomly divided into the control (CON) and heat stress treatment (HS). There were 5 replicates each in CON and HS, with 20 broilers per replicate. From days 28 to 42, the HS group was kept at 31 ± 1°C (9:00–17:00, 8 h) and was otherwise maintained at 21 ± 1°C as in the CON. On day 42, we calculated the growth performance of broilers (n = 8) and collected 3 and 6 cecal tissue samples per treatment for transcriptomic and metabolomic investigation, respectively, and 4 cecal content samples per treatment for metagenomic investigation.

Results and discussion
The results indicate that heat stress significantly reduced the average daily gain and body weight of broilers (value of p < 0.05). Transcriptome KEGG enrichment showed that the differential genes were mainly enriched in the NF-kB signaling pathway. Metabolomic KEGG enrichment showed that the differential metabolites were mainly enriched in the mTOR signaling pathway. 16S rDNA amplicon sequencing results indicated that heat stress increased the relative abundance of Proteobacteria and decreased the relative abundance of Firmicutes. Multi-omics analysis showed that the KEGG pathway jointly enriched by the differential genes, metabolites and microorganisms was purine metabolism. Pearson correlation analysis found that ornithine was positively correlated with SULT1C3, GSTT1L and g_Lactobacillus, and negatively correlated with CALB1. PE was negatively correlated with CALB1 and CHAC1, and positively correlated with g_Alistipes. In conclusion, heat stress can generate large amounts of reactive oxygen species, increase the variety of harmful bacteria, and reduce intestinal nutrient absorption and antioxidant capacity, thereby damaging intestinal health and immune function and reducing growth performance. This biological process is manifested in the complex regulation among genes, metabolites and microorganisms, providing a foundational theoretical basis for addressing the problem of heat stress.

Introduction
Chicken is the second largest type of meat product in China, after pork. The annual sales of broilers in China can reach 10.5 billion, and the per capita consumption has been
increasing year by year. Compared to laying hens, broilers can provide humans with more high-quality dietary protein and meat products (Resnyk et al., 2017). Heat stress is one of the biggest challenges facing global animal husbandry. Increases in ambient temperature and humidity affect animal production in summer and cause economic losses in severe cases (Salem et al., 2022). Heat stress causes high mortality and severe economic losses estimated at 240 million United States dollars per year in the poultry sector (St-Pierre et al., 2003), representing, for example, about 7% of the total heat-stress-caused losses in the French livestock industry in 2003 (Nardone et al., 2010). According to the duration of heat stress, it can be divided into acute heat stress (<7 days) and chronic heat stress (≥7 days); according to the temperature of the stress treatment, it can be divided into cyclic heat stress and sustained heat stress (Gonzalez-Esquerra and Leeson, 2006). Research has shown that acute heat stress can cause a heat shock response, leading to rapid initiation of heat shock protein synthesis and rapid changes in gene expression, while chronic heat stress can cause larger-scale changes (Xie et al., 2014). In addition, Xie et al. (2015) reported that chronic heat stress can lead to tissue damage. Research has found that the cyclic heat stress pattern is more in line with the characteristics of summer temperatures and closer to the actual living environment (Al Qaisi et al., 2022). Heat stress can affect the activity of the intestinal microflora and stimulate the feeding center of the hypothalamus to reduce its excitability, leading to a decline in broiler growth performance and even death (Wang, 2013). In addition, broilers respond to changes in the external environment by adjusting their metabolic levels, and these changes at the gene, metabolite and microorganism levels can be detected through behavioral observation and the use of transcriptome, metabolome and 16S rDNA amplicon sequencing technology (Jastrebski et al., 2017). These technologies have been used to reveal the influence of heat stress on the broiler cecum. Transcriptome sequencing (RNA-Seq) reflects the gene expression of a specific cell or tissue and has the advantage of high sensitivity. RNA-Seq has been successfully applied to study the molecular basis of complex traits, such as feed efficiency and myopathy, and to evaluate the molecular response to nutritional therapy and important aspects of immunity and disease resistance (Zampiga et al., 2018). In addition, in terms of heat stress, RNA-Seq analysis was used to identify key genes that respond to heat stress in laying hens, such as PDK4 and FGA (Wang et al., 2021). Transcriptome analysis identified potential target genes, especially those involved in cell migration and immune signaling responses to heat stress, which can inform future research on heat stress in broilers and could prove useful for improving disease resistance (Monson et al., 2018). Metabolomics is an analytical technology that studies the body's physiological changes and pathological characteristics from a dynamic perspective through the qualitative and quantitative determination of metabolites (Nicholson et al., 1999). It can build a direct correlation between metabolites and biological phenotypes, dynamically track and analyze the metabolites of animal bodies, help to analyze the relationship between phenotypes
and genetics, environmental and other factors, and provide a basis for the improvement and breeding of economic traits.Metabolomics is widely used to understand the changes in organism metabolism, such as metabolism, sugar, lipid, amino acid metabolism, and meat quality evaluation (Zhang Y. et al., 2023;Zhang F. et al., 2023;Zhang X. et al., 2023).Zampiga et al. (2021) found that chronic heat stress could regulate the changes of metabolites in breast muscle and plasma of broilers by metabolome, and found metabolites that might play an important role in regulating Energy homeostasis of the body.According to metabolomic analysis, heat stress has caused to changes in serum lipid metabolism of broilers.It is determined that heat stress can reduce the content of lysophosphatidylcholine and increase the content of phosphatidylcholine (Guo et al., 2021a).The 16S rDNA amplicon sequencing technology can analyze microbial communities to obtain information on community structure, differences and functions.It is also widely used in the poultry industry, revealing the interaction between microbial communities such as the gut, animal reproduction, growth and development, nutritional health, environmental factors, immunity and disease treatment (Pan and Yu, 2014).Emami et al. (2022) reported that using microbial amplicon analysis, it was found that heat stress can affect the diversity and composition of gut microbiota in broilers, which can provide a basis for developing nutritional strategies to maintain gut microbiota balance and alleviate the negative effects of heat stress on performance and health of broilers.In addition, the multi-omics analysis has shown unique advantages in revealing the underlying molecular regulatory mechanisms.For example, integrated analysis of transcriptome and metabolome was used to determine the critical amino acid metabolic pathway to improve duck eggs (Yan et al., 2023) and lipopolysaccharides could induce the immune stress pathway in broilers (Bi et al., 2022).Hubbard et al. (2019) reported that the combination of RNA-Seq and metabolomic data can identify new changes in gene regulation of broilers affected by heat stress, which reflect changes in pathways that affect metabolite levels.Integrated analysis of metabolome and microbiome has determined that heat stress can increase the relative abundance of harmful microbes in the cecum of broilers and reduce health-related metabolites such as L-malic acid, which can provide a basis for the impact of heat stress on physiological changes and intestinal health in broilers (Liu et al., 2022(Liu et al., , 2022a,b),b). The intestine is a sensitive part of heat stress and an important organ for nutrient absorption and immune regulation in poultry, making it crucial for poultry production (Deng et al., 2023).The cecum is the broiler gastrointestinal tract's most diverse area of microbes and various microbes are crucial in improving growth performance and maintaining physical health (Luo et al., 2021).Previous studies mainly explored the effects of heat stress on broiler intestines from a single omics perspective, including transcriptome, metabolomics, and microbiome (Dridi et al., 2022;Kim et al., 2022;Zhang Y. et al., 2023;Zhang F. et al., 2023;Zhang X. 
et al., 2023).However, very few studies have comprehensively revealed the heatstress impact on broilers' guts.Therefore, this experiment conducted chronic heat stress treatment on broilers and analyzed the effects of heat stress on the cecum of broilers at the genetic, metabolic, and microbial levels to provide a theoretical basis for subsequent research. Animal ethics The animal use protocol has been reviewed and approved by the Institutional Animal Care and Use Committee of Shanxi Agricultural University.All procedures involving the handling, management, and healthcare of live poultry follow the regulations for the use of experimental animals for scientific purposes, and are implemented in accordance with the Shanxi Agricultural University Ethics Committee (SXAU-EAW-2021C0630) in China. Experiment design and animal feeding Three hundred 1-day-old Arbor Acres broilers were purchased from Xiangfeng Poultry industry, Taigu county, Shanxi province.On the 28th day, 200 broilers with similar weights were selected and randomly divided into a control group (CON) and a heat stress treatment group (HS).There were 5 replicates in each group and had 20 broilers of each replication.All broilers were raised in a three-layer vertical cage, vaccinated as required, and regularly cleaned and disinfected the chicken house.The feeding period was divided into 1-21 days and 22-42 days.The trial period was 14 days.Feeding at 8:00 and 19:00 every day and ad libitum access to water and eating, the broilers were exposed to light for 23 h and dark for 1 h every day, and the temperature of the brood was 34 ± 1°C at 1-3 days, 32 ± 1°C at 4-7 days.After that, it declined by 1°C every day until it was kept at a constant temperature of 21 ± 1°C.On the 28th day, the broilers in the HS were subjected to chronic heat stress treatment until the end of the test.That was, the feeding temperature was raised to 33 ± 1°C at 9:00-17:00 daily, the heat stress treatment lasted for 8 h, and the rest of the time was kept at the same temperature as the control group (21 ± 1°C).The experimental broilers were fed and managed by the national standard GB/T 19664-2005 production technique criterion for commercial broiler.Each group was fed with corn-soybean meal basal diet.The corn-soybean meal formula was prepared according to the National Research Council recommendations.Analyzed nutrient concentrations in the experimental diets are reported in Supplementary Table S1. Growth performance and sample collection Prior to slaughter, broilers were prohibited from eating for 10 h and their body weight was recorded.Recorded the eating feed of broilers on time and calculated the average daily feed intake (ADFI), average daily gain (ADG), and feed conversion rate (FCR = ADFI/ ADG) of each group of broilers at the end of the experiment.The original data of growth performance in Supplementary Table S2. We took cecal samples from each group and washed with pre-cooled saline physiological solution.Then we quickly cooled them in liquid nitrogen, stored at −80°C fridge, and shipped three and six 2 cm 2 cecal samples to Shanghai Meiji Biomedical Technology Co., Ltd. for transcriptome and metabolome sequencing, respectively.In addition, four 2 cm 2 samples of cecal contents were also taken from each group and sent to Novogene Co., Ltd. for 16S rDNA amplicon sequencing. Extraction of total RNA Trizol reagent was used to extract caecum tissue.The Nanodrop 2000 and Agilent 2100 bioanalyzers then detected total RNA, purity and RIN (RNA integrity number). 
Construction of the cDNA library Took 1 μg of qualified total RNA sample, enriched the mRNA with magnetic beads, broke it up into 300 bp fragments, reversed transcribed the mRNA fragments into cDNA, and synthesized doublestranded cDNA.Added End Repair Mix to complement the end of the double-stranded cDNA from the sticky end to the flat end, added a tail at the 3′ end for ligating the adapter and obtained the final cDNA library after PCR amplification and purification.And then the cDNA library had been sequenced by the Illumina Novaseq 6000 platform. Alignment with the reference genome In order to ensure data quality, the original data was filtered before analysis, and the low-quality data was filtered out to reduce the interference caused by invalid data to obtain clean reads.The qualitycontrolled clean reads were compared with the Ensemble reference genome (reference genome version: GRCg6a)1 to obtain mapped reads for subsequent transcript assembly and expression calculation.In the quantitative analysis of genes using the REEM software, the quantitative indicator was TPM (Transcripts per kilobase million). Gene clustering analysis and screening of differential genes Genes were clustered to observe the effect of heat stress on broiler cecum genes.DESeq2 (Version 1.24.0)software was used for gene expression differential analysis to |Log2 Foldchange| ≥ 1 and Padjust < 0.05 were used as criteria for screening for differential genes.The multiple test correction method is BH.The detailed information of differential genes can be seen in Supplementary Table S3. Differential genes function enrichment In order to analyze the function of differentially expressed genes, we conducted GO and KEGG functional enrichment analysis on differentially expressed genes using Goatools (Version 0.6.5)and R (Version 1.6.2).Padjust < 0.05 was used to evaluate significant enrichment of the GO function and KEGG enrichment analysis.The multiple test correction method is BH. Real-time PCR verification Four differential genes (PDK4, CHAC1, EOMES and SULT1C3) were randomly chosen for the validation tests.β-actin was selected as the reference gene.Using the Primer Premier 5.0 to design primers, the primer information is shown in Supplementary Table S4.The primers were synthesized by Sangon Biotech (Shanghai) Co., Ltd.The relative expression of the different genes between the groups was calculated by the 2 −∆∆CT method, and the GraphPad Prism 8 software was used to plot. Metabolome analysis 2.5.1. Sample pretreatment Accurately weighed 50 mg of the cecal sample in a 2 mL centrifuge tube.Added 400 μl of methanol extract containing 0.02 mg/ml internal standard (2-Chloro-L-phenylalanine) and grinded at 50 Hz and −10°C for 6 min.Extracted at 5°C and 40 kHz for 30 min by ultrasound.Left a −20°C for 30 min.Centrifugation at 13,000 g, 4°C for 15 min, and sampling an equal amount of supernatant for machine analysis. The primary mass spectrometry condition was that the sample was ionized by electrospray.That was, the positive mode spray voltage was 3,500 V, and the negative mode spray voltage was 2,800 V.The mass spectrometry signal was acquired in positive and negative ion scanning modes, and the scanning range was 70-1,050 m/z.Heating temperature 400°C, capillary temperature 320°C, sheath flow velocity 40 arb, auxiliary airflow rate 10 arb, S-Lens voltage 50 V.The resolution MS2 was 17,500, and the Full MS was 70,000. 
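As a small worked example of the 2^-ΔΔCT relative-expression calculation used in the qPCR verification described above (with β-actin as the reference gene), consider the sketch below; the Ct values are hypothetical placeholders, not measured data from this experiment.

def relative_expression(ct_target_hs, ct_ref_hs, ct_target_con, ct_ref_con):
    # 2^-ddCt: dCt = Ct(target) - Ct(reference); ddCt = dCt(treated) - dCt(control).
    dct_hs = ct_target_hs - ct_ref_hs
    dct_con = ct_target_con - ct_ref_con
    ddct = dct_hs - dct_con
    return 2.0 ** (-ddct)

# Hypothetical Ct values for one gene (e.g., PDK4) in HS versus CON samples.
fold_change = relative_expression(ct_target_hs=24.1, ct_ref_hs=17.9,
                                  ct_target_con=26.0, ct_ref_con=18.1)
print(fold_change)   # > 1 indicates up-regulation in the heat-stressed group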
Data preprocessing ProgenesisQi (Waters corporation, Milford, United States) software was used to identify and integrate peaks.The result was a data matrix that can be used for subsequent analysis.MS and MS/MS mass spectrometry information were then combined with metabolic databases HMDB and Metlin to match while identifying metabolites according to secondary mass spectrometry matching scores, normalizing and log-converting data to Log10. Sample variability comparative analysis To analyze the differences between groups, we conducted principal component analysis (PCA) and partial least squares discrimination analysis (PLS-DA) analysis.PCA analysis used the data conversion type as unit variance conversion, with a confidence level of 0.95.PLS-DA analysis adopted the PLS-DA data conversion type of pareto conversion.The PLS-DA confidence level was 0.95, and the number of replacements was 200. Screening differential metabolites and KEGG enrichment The metabolites obtained above were screened for differential metabolites by projecting VIP >1, and FDR < 0.05 as screening criteria.The software used was R (Version 1.6.2).Perform compound classification analysis on differential metabolites.To further clarify the functions of differential metabolites, they were compared to the KEGG database and significantly enriched metabolic pathways were screened by setting Padjust < 0.05 as the standard.The multiple test correction method is BH. Microbial DNA extraction and 16S rDNA amplicon sequencing Microbial genomic DNA was extracted by the CTAB method, and then the purity and concentration of extracted DNA were detected on 1% agarose gel.PCR amplification was performed using diluted genomic DNA as a template.The primers for the 16S V34 region were 341F (CCTAYGGRBGCACAG) and 806R (GGACTACNNGGTATCTAAT; Xiao et al., 2022).PCR products were detected by agarose gel electrophoresis with a concentration of 2%, and the target bands were recovered using the gel recovery kit provided by Qiagen Company.Using NEBNext ® Ultra™ IIDNA Library Prep Kit was used for library construction, and the constructed library was subjected to Qubit and qPCR quantification.After the library was qualified, used the NovaSeq 6000 for machine sequencing.After sequencing, used the DADA2 module in the QIIME2 software to denoise and filter out sequences with an abundance less than 5 in order to obtain the final ASVs (Amplicon sequence variables).Subsequently, the classify-sklearn module in QIIME2 software (qiime2-2020.6)was used to compare the obtained ASVs with the database to obtain species information for each ASV. Bioinformatics analysis Used the QIIME2 software to calculate the Shannon, Simpson, Chao1, and ACE indexes and used the Simpson exponent as a reference to draw a rarefaction curve.The composition of microorganisms was presented using Venn plots, PCA plots, top 10 phylum horizontal species relative abundance histograms, and top 10 genus horizontal species relative abundance histograms.We used the LEfSe software and set a threshold of 4 (LDA = 4) for significant difference species analysis.In addition, to study the function of microorganisms, we conducted PICRUSt functional prediction analysis and compared it to the KEGG database.The above are all using software R (Version 3.5.3). 
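The differential-metabolite screen described above (VIP > 1 combined with FDR < 0.05 after Benjamini-Hochberg correction) can be sketched as follows. The VIP scores and raw p-values here are random placeholders; the study's actual screening was carried out in R on the LC-MS intensity matrix.

import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)
vip = rng.uniform(0.2, 2.5, size=200)      # mock VIP scores from a PLS-DA model
raw_p = rng.uniform(0.0, 1.0, size=200)    # mock per-metabolite test p-values

reject, fdr, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")   # BH correction
significant = (vip > 1.0) & (fdr < 0.05)
print("differential metabolites:", int(significant.sum()))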
Multi-omics analysis The Pearson correlation coefficient was used to calculate the correlation between differential genes and metabolites, differential genes and ASVs, and differential metabolites and ASVs to obtain the interaction relationship between genes, metabolites, and microorganisms.In addition, by conducting KEGG co-enrichment analysis on differential genes and metabolites, the function of genes and metabolites co-participating in the pathway can be clarified.By comparing the KEGG differential metabolic pathway predicted by PICRUST function with the KEGG pathway enriched by metabolomics, it was found that co-participating pathways can clarify the contribution of microorganisms to metabolic products. Data analysis Using the SPSS 26.0 software, the value of p of 42 days body weight is calculated by a mixed model, with the impact of replication as a random variable and grouping as a fixed factor to evaluate the impact of grouping on body weight.Excluded the impact of replication on selected experimental broilers through mixed model calculations, the independent sample t-test was used to evaluate the significance in the subsequent comparison between the two groups of data.The α diversity index was showed using GraphPad Prism 8 software.The value of p is obtained through calibration using the "BH" method.The value of p < 0.05 is considered statistically significant. Growth performance Using the mixed model to evaluate the impact of replication on 42 days body weight by using replication as a random variable and grouping as a fixed effect.We obtained that replication did not have a significant impact on body weight (value of p > 0.05).Therefore, this eliminates the influence of random variables on the data.From Table 1, it can be seen that compared with the control group, the broilers of the HS showed a significant decrease in body weight and ADG at 42 days (value of p < 0.05), while there were no significant changes in ADFI and FCR (value of p > 0.05). Quality control and reference genome comparison Table 2 shows that the percentage of Q30 bases obtained by quality control is above 93.36%,Q20 is above 97.45, and the average error rate of sequencing bases is below 0.1%.The number of clean reads on the genome accounted for 88.78-90.53%, the alignment rate of multiple alignment positions on the reference sequence was 2.26-2.74%,and the alignment rate of the unique alignment position on the reference sequence was 86.24-87.88%.These results indicate that sequencing results had a low error rate and a high alignment rate of genes on the reference genome, which can be used for subsequent analysis. Gene clustering analysis and screening of differential genes From Figure 1A, it can be seen that there is a significant change in the gene expression level between the CON and HS.According to the screening criteria, we screened 96 differential genes, of which 78 were up-regulated and 18 were down-regulated (Figure 1B). 
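As a minimal illustration of the multi-omics correlation step described in the methods above, the sketch below computes Pearson correlations between a differential gene, a differential metabolite, and an ASV abundance across shared samples. All vectors, and the gene/metabolite/ASV names in the comments, are synthetic examples rather than measured values.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
gene_expr = rng.normal(size=8)                                    # e.g., CALB1 expression (TPM)
metabolite = 0.7 * gene_expr + rng.normal(scale=0.5, size=8)      # e.g., ornithine intensity
asv_abund = rng.normal(size=8)                                    # e.g., g_Lactobacillus abundance

for label, vec in [("gene-metabolite", metabolite), ("gene-ASV", asv_abund)]:
    r, p = pearsonr(gene_expr, vec)
    print(f"{label}: r = {r:.2f}, p = {p:.3f}")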
Go enrichment analysis We conducted GO enrichment analysis on differential genes.Figure 1C shows the top 30 GO metabolic entries (Supplementary Table S5).It can be seen from Figure 1C that the enrichment of differential genes in biological process (BP) mainly included immune system process, immune effector process and complement activation.The main activities involved in the enrichment of cellular components (CC) and molecular function (MF) were extracellular region, extracellular space, collagen-containing extracellular matrix, calcium-dependent protein binding and enzyme inhibitor activity. KEGG enrichment analysis To further investigate the function of the differential genes, we conducted the KEGG enrichment analysis (Supplementary Table S6).Figure 1D shows the top 20 differential pathways.Among them, the NF-kB signaling pathway, and cellular senescence were related to regulating heat stress.In addition, although the enriched calcium signaling pathway and intestinal immune network for IgA production are not shown in Figure 1D, they are also enriched and crucial in regulating heat stress. qPCR verification The results of Figure 1E show that the qPCR results are consistent with the trend of transcriptome results and have a high similarity, indicating that the transcriptome data are reliable and accurate. PCA and PLS-DA analysis For the analysis of anions and cations, Figures 2A,B show the distribution of cations and anions between PCA analysis groups, respectively, and it could be found that the differences between groups were significant.Although PCA analysis can reveal differences between sample groups, this algorithm has limitations.Therefore, to further verify the differences between groups, we selected the PLS-DA test (Figures 2C-F).Figures 2C,E were the cation and anion distributions, respectively, and it could be found that the differences between the groups were evident.The distance of the samples in the group was closer.At the same time, the PLS-DA replacement test was carried out.And Figures 2D,F were, respectively, cation and anion replacement test results.The R 2 and Q 2 of the anion and cation were 0.9182, −0.1776, 0.9093 and −0.6454, respectively.Theoretically, the closer R 2 and Q 2 were to 1, the more stable the model, the better the prediction ability, and the high confidence of the results.The above results indicated that metabolite differences between were groups were significant. Screen for differential metabolites Six hundred and nineteen differential metabolites were screened, of which 47 metabolites of known structures were screened (Supplementary Table S7).Figure 3A shows that 13 metabolites of known structures were up-regulated (4 and 9 anionic and cation metabolites, respectively), and 34 metabolites were down-regulated (19 and 15 anionic and cation metabolites, respectively) among the metabolites of known structures. Differential metabolic species class analysis The superclass classification hierarchy classifies the distribution of differential metabolites in the HMDB database, and Figure 3B shows the HMDB classifications.Among them, organic acids and derivatives accounted for 27.78%, lipids and lipid-like molecules accounted for 25.00%, organoheterocyclic compounds accounted for 16.67%, nucleosides, nucleotides, and analogues accounted for 16.67%, organic oxygen compounds accounted for 8.33%, and benzenoids accounted for 5.66%. 
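A hedged sketch of the unsupervised PCA step reported above, applied to a samples-by-metabolites intensity matrix after log10 transformation and unit-variance scaling, follows; the matrix is random placeholder data rather than the study's LC-MS measurements.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = rng.lognormal(size=(12, 200))                              # 12 mock cecal samples x 200 metabolite features
X_scaled = StandardScaler().fit_transform(np.log10(X + 1.0))   # log10 conversion + unit-variance scaling

pca = PCA(n_components=2)
scores = pca.fit_transform(X_scaled)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("score matrix shape:", scores.shape)                     # one (PC1, PC2) point per sample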
KEGG enrichment analysis A total of 19 KEGG pathways are enriched (Supplementary Table S8; Figure 3C).We found that the number of metabolites enriched in the purine metabolism pathway was the largest.Further, we analyzed the pathways with significant differences.Eight differential pathways were screened for KEGG enrichment analysis of differential metabolites (Figure 3D), including the FoxO signaling pathway, mTOR signaling pathway, D-arginine and D-ornithine metabolism, arginine biosynthesis, aminoacyl-tRNA biosynthesis, arginine and proline metabolism, ABC transporters and purine metabolism pathway, all of which play a direct or indirect role in regulating heat stress. α Diversity analysis Figure 4B shows that heat stress increases the abundance of microbial communities and the types of low-abundance species.Randomly selected a certain amount of data from the sample and calculated the rarefaction curve based on the Simpson index.As shown in Figure 4A, as the curve tends to flatten out, the sequencing data volume is more reasonable, and more data volume will not affect α diversity index has a significant impact. Microbial composition analysis The principal component analysis (PCA) showed a variation of 27.79% for PC1 and 17.8% for PC2 (Figure 4C).After filtering out the low-quality data, 478 and 1,358 ASVs were observed in CON and HS, respectively, and 464 ASVs in both groups (Figure 4D).Analyzing their species, Bacteroidota, Firmicutes, and Proteobacteria were the dominant species in the cecum of 42-day-old broilers.Meanwhile, heat stress increased the relative abundance of Proteobacteria in broilers and decreased the relative abundance of Bacteroidota and Firmicutes (Figure 4E).At the genus level, heat stress reduced the relative abundance of Bacteroides, Lactobacillus, and Alistipes, and increased the relative abundance of Fusobacterium, Thiobacillus, and PHOS-HE36 (Figure 4F). Analysis of significantly different microbial communities Analyze microorganisms with statistical differences between groups using the LEfSe software.Figure 5A shows 31 biomarkers at different classification levels in the CON and HS.Within the HS group of p_Proteobacteria, p_Fusobacteriota, and o_Burkholderiales were significantly higher than the CON.In contrast, the levels of the g_ Alistipes, f_Rikenellaceae, g_Lactobacillus, and g_Bacteroides were significantly lower than those of the CON.The evolutionary branch indicates (Figure 5B) that the significant differences in microorganisms in the HS are mainly concentrated in p_Proteobacteria, p_ Fusobacteriota, p_Chloroflexi.In contrast, the CON mainly concentrates on c_Bacteroidales and f_Lactobacillaceae. Functional analysis of cecum microorganisms The functional prediction of the cecal microbiota is conducted on the PICRUSt platform.By comparing to the KEGG database at level 2 (Figure 5C), the gut microbiota of AA broilers is mainly involved in carbohydrate metabolism, membrane transport, amino acid metabolism, nucleotide metabolism, translation, replication and repair, energy metabolism, metabolism of cofactors and vitamins, poorly characterized, cellular processes and signaling.To further search for differential metabolic pathways, using LEfSe analysis, set LDA = 3.As shown in Figure 5D, the KEGG database at level 3 shows that HS microorganisms are mainly concentrated in fatty acid metabolism, while CON microorganisms are concentrated in such as purine metabolism. 
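For reference, the Shannon and Simpson α-diversity indices reported above can be computed from a single sample's ASV count vector as sketched below. QIIME2 was used in the study (its Shannon index uses log base 2), so this is only an illustration of the underlying formulas with hypothetical counts.

import numpy as np

def shannon(counts, base=2.0):
    # Shannon index H = -sum(p * log(p)); QIIME2 reports the log2 form.
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * (np.log(p) / np.log(base))).sum())

def simpson(counts):
    # Gini-Simpson index 1 - sum(p^2); higher values mean higher diversity.
    p = np.asarray(counts, dtype=float) / np.sum(counts)
    return float(1.0 - np.sum(p ** 2))

asv_counts = [120, 80, 40, 10, 5, 1]                 # hypothetical ASV counts for one sample
print("Shannon:", round(shannon(asv_counts), 3), "Simpson:", round(simpson(asv_counts), 3))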
Discussion

As an important economic indicator of meat and poultry production, the quality and performance of poultry meat often directly determine the level of breeding efficiency, and with the development of the economy, people increasingly favor chicken with good taste, good meat quality and richer nutrition (Ma et al., 2022). Poultry exposure to heat stress can disrupt thermoregulation and homeostasis, reduce poultry performance, health and welfare, and lead to weight loss or even negative growth, low immunity, and even death (Mujahid et al., 2009; Kim et al., 2021; Sarsour et al., 2022). Growth performance is an important indicator for evaluating whether the production efficiency of poultry meets expectations. The decrease in growth performance caused by heat stress is related to decreased feed intake (Peng et al., 2023). The results of this study indicated that heat stress reduced the body weight and ADG of broilers but had no effect on ADFI and FCR. The results of the current study are inconsistent with previous studies on ADFI and FCR. Previous studies have found that high-temperature environments reduced broilers' body weight, ADG, and ADFI (Deng et al., 2023). Sun S. et al. (2023) conducted heat stress treatment on 28-day-old broilers and found a significant reduction in ADFI, ADG, and FCR in 42-day-old broilers. In addition, Yilmaz and Gul (2023) found that heat stress reduced the ADG in 42-day-old broilers but had no significant effect on the FCR. Taken together, these results demonstrate the negative effects of heat stress on broilers and indicate that the heat stress model was successfully established. The lack of an ADFI effect may be because, when the temperature recovers, periodic heat stress can lead to birds overeating, thereby weakening the impact of heat stress on ADFI. In addition, the impact of heat stress on the growth performance of broilers depends not only on feed intake but also on other factors such as physiological, biochemical and hormonal changes, broiler breed, duration of heat stress, and temperature of the heat stress treatment (Al-Abdullatif and Azzam, 2023). Research has shown that heat stress can damage growth performance by reducing protein and nutrient digestibility. Insulin growth factor in the endocrine system is the main regulator of muscle metabolism, participating in all stages of muscle formation and muscle regeneration; it can increase protein synthesis and promote differentiation, while heat stress affects the secretion of insulin growth factor and thus reduces protein synthesis (Nawaz et al., 2021).

FIGURE 4 Analysis of microbial changes in the microbiome. (A) Rarefaction curve, (B) α diversity indexes, the screening criteria are value of p < 0.05. (C) PCA plot, (D) Venn plot, top 10 species at the (E) phylum level, and the relative abundance of the top 10 species at the (F) genus level. The cecal contents of 42-day-old broilers were collected for analysis (n = 4). CON: broilers were raised in an environment of 21 ± 1°C. HS: broilers were raised in an environment of 33 ± 1°C (9:00-17:00) at 28 days, subjected to heat stress treatment for 8 h, and remained at an appropriate temperature (21 ± 1°C) for the other time as in CON.

FIGURE 5 LEfSe analysis and PICRUSt functional prediction. (A) LDA bar chart, the length of the bar chart represents the impact of different species, with the LDA = 4. (B) Evolutionary branch chart, each small circle at different classification levels represents a classification at that level, and the diameter of the small circle is proportional to the relative abundance. Species with no significant differences are uniformly colored in yellow, and differential species are colored according to the group. (C) PICRUSt functional prediction: top 10 stacked bar chart at level 2. (D) PICRUSt function prediction at level 2 (LDA = 3). The cecal contents of 42-day-old broilers were collected for analysis (n = 4). CON: broilers were raised in an environment of 21 ± 1°C. HS: broilers were raised in an environment of 33 ± 1°C (9:00-17:00) at 28 days, subjected to heat stress treatment for 8 h, and remained at an appropriate temperature (21 ± 1°C) for the other time as in CON.

Research shows that the transcriptome can help researchers analyze which pathways and genes are activated in response to a stressor (Zhang Y. et al., 2023; Zhang F. et al., 2023; Zhang X. et al., 2023). Compared to other parts of the intestine, the cecum of broilers plays a more important role in host defense (Khan and Chousalkar, 2020). Therefore, the cecum is particularly relevant for studying the effects of heat stress. From the transcriptome results, 96 differential genes were identified, of which 78 were up-regulated and 18 were down-regulated. PDK4 (pyruvate dehydrogenase kinase 4), as a gene with significant differences, plays an important role in regulating energy homeostasis, glycolysis, and fat decomposition (Honda et al., 2017; Wen et al., 2021; Forteza et al., 2023). In mice with ischemic stroke, it was found that cecal metabolism was disordered and that PDK4, which is related to fatty acid metabolism, was up-regulated, which may lead to a reduction in steroid metabolic process activity (Ge et al., 2022). Research has shown that PDK4 was previously identified as a differential gene in multiple chicken heat stress experiments (Wang et al., 2021). In an experiment on high-altitude-stressed Tibetan sheep, glycolysis increased ATP content by up-regulating the expression of PDK4, providing energy for resisting hypoxia stress (Wen et al., 2021). As is well known, high temperatures can damage protein stability and lead to cellular dysfunction (Mackei et al., 2021). FKBP10 is a member of the FK506 binding protein gene family and is involved in many functions, including protein folding and repair in response to heat stress, which is necessary to maintain natural peptides and prevent protein aggregation (Akbarzadeh et al., 2018). Our previous research found that heat stress can affect lipid metabolism and increase cholesterol content in broilers (Zhang L.
et al., 2022).Studies have found that LBP (lipopolysaccharide binding protein) has been used as an indicator of the impairment of intestinal barrier function, and its level can reflect the degree of intestinal leakage (Vancamelbeke and Vermeire, 2017;Wu et al., 2023).Intestinal barrier dysfunction can further contribute to the occurrence of alcoholic hepatitis by acting on the gut-liver axis, triggering inflammatory cascade reactions, and aggravating liver inflammation and LBP levels (Tilg et al., 2016).Heat stress can lead to an increase in LBP levels, which is consistent with our trend (Wickramasinghe et al., 2023).GO results indicated that differential genes were mainly enriched in immune processes.The KEGG results further indicated that differential genes were mainly enriched in other pathways, such as the NF-kB signaling pathway, calcium signaling pathway, and intestinal immune network for IgA production.These pathways play an important role in the intestinal injury.It is reported that heat stress can result in gut microflora dysbiosis, cause bacterial translocation, and thus induce the production of intestinal endotoxin.These endotoxins can activate TLR4-mediated reactions, including initiation of the NF-κB signaling pathway (Tang et al., 2021).NF-κB is a major transcription factor involved in inflammatory diseases, which can respond to heat-stress stimuli and activate the NF-κB signaling pathway in broilers (Xu et al., 2023), thereby inducing tissue damage (Liu W. C. et al., 2021;Liu Y. R. et al., 2021).NF-κB acts downstream of TLR4 and other immune receptors, increasing the excessive production of IL-6, IL-1β, and TNF-α leads to the occurrence of inflammatory responses in response to heat stress (Vallabhapurapu and Karin, 2009).Liu et al. (2022Liu et al. ( , 2022a,b) ,b) found that chronic heat stress can enhance the NF-κB signaling pathway and promote the occurrence of liver inflammation in broilers.The intestinal immune network for IgA production have been confirmed to play an important role in immunity (Zhang et al., 2018).The intestines are the largest lymphoid tissue, and intestinal immunity can produce many non-inflammatory IgA antibodies, the first line of defense against heat stress.Multiple cytokines (TGF-β, IL-10, IL-4, IL-5 and IL-6) are essential for B cells to differentiate into IgA plasma cells (Nagatake et al., 2019;Yang et al., 2021).IgA primarily functions in the intestinal cavity through secretory immunoglobulin A (SIgA), which maintain intestinal mucosal homeostasis and prevent harmful substances from adhering and entering the intestinal barrier (Lammers et al., 2010).Research has shown that chronic heat stress can damage intestinal immune function by promoting the inflammatory response and reducing IgA secretion (Yang et al., 2019).Heat stress can also affect the calcium signaling pathway, leading to mitochondrial oxidative stress and severe calcium overload, damaging mitochondrial structure and function, and even leading to apoptosis (Coble et al., 2014;Zhang W. et al., 2022;Yao et al., 2023). Metabolomics research can understand changes in the metabolism of organisms, helping researchers better understand how chicken products are affected by the external environment (Zhang Y. et al., 2023;Zhang F. et al., 2023;Zhang X. 
et al., 2023). The fermentation products produced by cecal microorganisms have a positive impact on intestinal health, and numerous cecal metabolites play a crucial role in maintaining intestinal barrier function (Liu et al., 2022a,b). Therefore, the cecum is particularly relevant for studying the effects of heat stress on changes in cecal metabolites in broilers. According to the metabolomics results, 619 differential metabolites were identified, including 47 metabolites with known structures, of which 13 were up-regulated and 34 were down-regulated. As one of the most prominent differential metabolites in the metabolome, ascorbyl palmitate has the function of clearing reactive oxygen species (ROS) and protecting against DNA damage when coping with stress, and plays a key role in protecting cells and cell membranes from oxidative damage (Xiao et al., 2014).

FIGURE 6 KEGG co-enrichment analysis. Transcriptome (n = 3) and metabolome (n = 6) co-enrichment pathways in the cecum of 42-day-old broilers.

In addition, ascorbyl palmitate is believed to increase the amount of vitamin C and thereby the resistance to oxidative stress, thus increasing the resistance to certain diseases (Nieva-Echevarría et al., 2021). Therefore, the oxidative damage caused by heat stress is also reflected in the reduction of ascorbyl palmitate content. Zhang Y. et al. (2023), Zhang F. et al. (2023), and Zhang X. et al. (2023) showed that an increase in ADP ribose content can identify damaged DNA and further activate the base excision repair mechanism to participate in heat stress regulation. This is also consistent with the upregulation of ADP ribose content in this study. KEGG enrichment results showed that differential metabolites were mainly enriched in purine metabolism and ABC transporters. Purine is involved in DNA and RNA functions and is essential for cell survival and proliferation (Pedley and Benkovic, 2017). Heat stress leads to oxidative stress in organisms; purine metabolism is a basic reaction of oxidative stress, and an imbalance between the purine salvage and de novo synthesis pathways will lead to the production of ROS (Yu et al., 2021; Tian et al., 2022). Besides, the metabolism of intestinal microorganisms will affect changes in metabolite content. In this study, the differential pathways from the microbial function prediction also include purine metabolism, which also explains why it is enriched in the metabolome data. ABC transporters are transmembrane proteins that can transport many molecules across the cell membrane (Yu et al., 2021). As a member of the ABC transporters, ABCB10 deficiency may lead to mitochondrial oxidative damage and ROS production (Liesa et al., 2012). ROS exceeding the tolerance threshold of the body can damage the body's antioxidant system, leading to a decrease in antioxidant enzyme activity (Rahman and Rahman, 2012). Microorganisms living in the gastrointestinal tract can affect poultry's nutrition, physiology and intestinal development (Shang et al., 2018). Research has shown that heat stress can increase the richness and abundance indicators of microbial communities, increase OTUs (operational taxonomic units), and thus increase the Chao1, Shannon, Simpson, and ACE indexes (Goel et al., 2022a; Liu et al., 2022a,b). This is consistent with the results of this study, which found that heat stress increased the number of OTUs and the α diversity indexes, which may be due to an increase in harmful microbial communities. At
the phylum level, heat stress increased the relative abundance of Proteobacteria and decreased the relative abundance of Bacteroidota and Firmicutes.At the genus level, heat stress reduced the relative abundance of the Alistipes and increased the relative abundance of the Fusobacterium.LEfSe analysis showed that p_ Proteobacteria and p_Fusobacteriota were the differential microbiota under heat stress.At the same time, the relative abundance of g_ Alistipes and f_Rikenellaceae was significantly lower than that of the CON.Functional prediction analysis at levle 2 showed that the gut microbiota of AA broilers was mainly involved in carbohydrate metabolism, membrane transport, amino acid metabolism, replication and repair.This is consistent with the Li et al. report's functional prediction results (Yi et al., 2023).At the level 3 KEGG metabolic pathway showed that fatty acid metabolism was enhanced in the HS.Research has shown that fatty acid metabolism is an important mechanism closely related to energy homeostasis under heat stress (Lim et al., 2022).This is similar to the report of Jastrebski et al. (2017) that heat stress can increase the expression of enzymes related to fat metabolism, thereby improving fatty acid metabolism.In addition, Goel et al. (2022b) research found that heat stress can reduce the number of Bacteroidetes in the ileum of broilers.Bacteroidetes are generally considered to maintain complex and beneficial relationships with the host, including fermenting carbohydrates to produce volatile fatty acids (VFAs) as an energy source for host utilization and are positively correlated with growth performance (Zhu et al., 2021).In addition, Firmicutes and Proteobacteria are related to the fermentation of undigested dietary components.Firmicutes contribute to the production of the polysaccharide and butyrate (Wu et al., 2018).Notably, a high proportion of Proteobacteria indicates intestinal ecological imbalance and is associated with the pathogenesis of many diseases, such as diarrhea, inflammatory bowel disease, and colitis (Wu et al., 2021).Alistipes belonging to the Bacteroidota is considered a relatively new bacterial genus that has protective effects on specific diseases, including liver fibrosis, colitis, cancer immunotherapy and cardiovascular diseases (Parker et al., 2020).It is also a major producer of short chain fatty acids in bacteria (Qi et al., 2019).They have the characteristics of glycolysis and proteolysis and produce acetic acid by producing fibrinolysis, digesting gelatin and fermenting carbohydrates (Zhu et al., 2019).As a member of Fusobacteriota, the increase in abundance of Fusobacterium can act as a pro-inflammatory factor to promote the occurrence of intestinal tumors (Patra and Kar, 2021).In addition, Fusobacterium metabolites may make the tumor microenvironment more comfortable over time by directly promoting tumor cell proliferation, vascular growth or immune cell infiltration (Kostic et al., 2013).Research found that the Rikenellaceae was related to metabolism and gastrointestinal health in the body and a large number of Rikenellaceae had the potential to protect against cardiovascular and metabolic diseases related to visceral fat and were potential biomarkers of healthy aging and longevity (Pin Viso et al., 2021;Tavella et al., 2021;Wang et al., 2022).Therefore, the reduction of the relative abundance of Rikenellaceae may lead to the shortening of cell life, which also confirms that differential genes are enriched in cellular senescence. 
There is a strong interdependence between gut microbiota, genes and metabolites.Therefore, multi-omics analysis can help us analyze specific mechanisms of action.Pearson correlation analysis showed that ornithine was negatively correlated with CALB1 and positively correlated with SULT1C3, GSTT1L and ASV2 (g_Lactobacillus).PE was negatively correlated with CALB1 and CHAC1.Studies have shown that amino acids serve as components of synthetic proteins and as important physiological and behavioral regulators, such as regulating stress responses (Chowdhury et al., 2021).Long-term heat stress can reduce the content of most free amino acids (Chowdhury et al., 2021), such as a decrease in ornithine content in the plasma of laying hens exposed to long-term heat stress (Chowdhury et al., 2014).L-ornithine is one of the metabolites in the urea cycle, proline, glutamate, arginine, and polyamine metabolism.It can stimulate the secretion of growth hormone by the pituitary gland and promote the breakdown and metabolism of proteins, sugars and fats (Xiong et al., 2016).Meanwhile, under heat stress conditions, a decrease in ornithine content may weaken the stimulation of glutathione metabolism, thereby reducing the antioxidant capacity of broilers (Xiong et al., 2016).SULT1C3, which has sulfotransferase activity, plays a role in larger biomolecules, including proteins and carbohydrates, and is vital in maintaining tissue structure and cellular signaling (Rondini et al., 2014).GSTT1L is believed to be related to glutathione metabolism in broilers, participating in antioxidant activity, improving heat tolerance and aging (Zhang et al., 2017;Liu et al., 2019).However, research has found that supplementing Lactobacillus probiotics can improve the antioxidant capacity of broilers (Dashti and Zarebavani, 2021).The above results confirm the findings of this study.Therefore, the downregulation of these genes and microorganisms reflects a decrease in antioxidant levels and heat tolerance in broilers.Heat stress is related to the secretion of many hormones (such as estrogen, glucocorticoid, and catecholamine), which may affect the expression of the CALB1 in tissues in different ways (Ebeid et al., 2012).CALB1, as an intracellular transporter protein, can transport Ca 2+ and act as a Ca 2+ sensor to prevent increased intracellular Ca 2+ concentration from causing toxicity (O'Toole, 2011).In addition, CALB1 is a rate-limiting step for epithelial cells to absorb calcium, thus CALB1 expression is highly correlated with intestinal calcium absorption efficiency (Valable et al., 2020).Bahadoran et al. 
(2018) found that heat stress can increase the expression of the CALB1 in the uterus of laying hens, which helps to increase the resistance of uterine cells to the harmful effects of heat stress.As the main source of phospholipids, PE plays an important role in the integrity of cells and organelle membranes in broilers (Guo et al., 2021b).Research shows that heat stress leads to the decrease of phospholipid level in broilers, which may be due to the damage to cell membrane caused by heat stress by increasing the activity of phospholipase A2 and promoting the decomposition of phospholipids (Guo et al., 2021b).The overexpression of CHAC1, which is induced in response to endoplasmic reticulum stress, will lead to large glutathione consumption (Aquilano et al., 2014;He et al., 2021).In addition, the high expression of CHAC1 in male broilers indicates that the degradation rate of glutathione is higher than average, which may play an important role in oxidative stress (Brothers et al., 2019).This is consistent with our research results.Therefore, the broiler will upregulate the expression of CALB1 and CHAC1 by responding to cell damage caused by heat stress, thereby increasing the body's resistance.Further research has found that KEGG co-enrichment of differential genes and differential metabolites indicated such as folate biosynthesis, the FoxO signaling pathway, and the mTOR signaling pathway.According to reports, folic acid not only participates in nutrient metabolism but also has free radical scavenging and ROS activity.Dietary supplementation with folic acid can improve the antioxidant performance and immune status of broilers under heat stress, which may be due to the role of folic acid in regulating protein metabolism (Gouda et al., 2020).The enriched ALPL in folate biosynthesis can involve in inflammatory response, bone growth, and bone calcium metabolism (Sharma et al., 2014).High-temperature environments may lead to inflammatory reactions and disrupt bone calcium metabolism in poultry.Therefore, we speculate that upregulation of the ALPL can maintain bone health and reflect the resistance of poultry to high temperatures and pathogens.The mTOR signaling pathway is a key nutrient perception pathway that regulates cell metabolism and lifespan in response to various changes in stress, growth factors, and cell energy levels (Su et al., 2019).A previous study suggested that bovine mammary epithelial cells may resist heat stress damage by enhancing the absorption and metabolism of intracellular amino acids and activating the mTOR signaling pathway (Fu et al., 2021).In addition, heat stress significantly upregulated the expression of mTOR signaling pathway related genes, which is consistent with our research results (Fu et al., 2021).L-arginine, as a metabolite significantly enriched in the mTOR signaling pathway, is essential for maintaining growth, reproduction, and immunity (Murakami et al., 2012).Research has found that supplementing arginine can enhance the development of the small intestine and nutrient absorption (Abdulkarimi et al., 2019).Therefore, the decrease in L-arginine content also reflects that heat stress can cause damage to the body and reduce immune function.The FoxO signaling pathway is involved in various cellular functions, including cell proliferation, apoptosis, autophagy, oxidative stress, and metabolic disorders (Xing et al., 2018).According to reports, under environmental stress conditions, the FoxO signaling pathway in fish is significantly enriched, which is 
consistent with our research results (Shang et al., 2023; Sun J. et al., 2023). The ROS produced by heat stress can regulate the expression of FoxO at the levels of transcription, protein activation, phosphorylation, and acetylation, and overactivation and overexpression of FoxO can lead to the occurrence of various diseases (Liu et al., 2023). In addition, FoxO can also affect the expression of the Bcl-2 protein family, stimulate the expression of death receptor ligands and tumor necrosis factor-related apoptosis-inducing ligands, and induce cell death through mitochondria-mediated endogenous pathways and death receptor-mediated exogenous pathways (Fu and Tindall, 2008). In summary, our study found that heat stress led to decreased growth performance, intestinal oxidative damage, and reduced antioxidant capacity in broilers, a process related to the complex regulation of genes, metabolites, and microorganisms. This will provide a theoretical reference for the poultry industry to address the problem of heat stress. Conclusion In this study, heat stress reduced body weight and ADG; altered the expression of genes involved in purine metabolism, the calcium signaling pathway, the intestinal immune network for IgA production and the FoxO signaling pathway; increased the cecal microbiota α-diversity index and the relative abundance of Proteobacteria and Alistipes; and decreased the relative abundance of Bacteroidetes and Firmicutes. PICRUSt prediction showed that the cecal microorganisms of AA broilers in HS were mainly enriched in fatty acid metabolism, while those in CON were mainly enriched in the purine metabolism pathway. The multi-omics analysis found that the KEGG pathway shared by differential genes, metabolites and microorganisms was purine metabolism. Pearson correlation analysis showed that ornithine was positively correlated with SULT1C3, GSTT1L and g_Lactobacillus, and negatively correlated with CALB1. L-arginine and PC were negatively correlated with IFI6 and positively correlated with SULT1C3. PE was negatively correlated with CALB1 and CHAC1, and positively correlated with g_Alistipes. Data availability statement The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: NCBI and MetaboLights databases - transcriptome sequencing (PRJNA987758), 16S rDNA sequencing (PRNA972642), LC-MS (MTBLS8078). FIGURE 1 Transcriptome analysis and qPCR validation results. (A) Gene clustering heatmap. (B) Volcano plot analysis of differential genes; up, down and nosig stand for up-regulated, down-regulated and non-significant genes, respectively. The criteria for screening differential genes are |Log2 Foldchange| ≥ 1 and Padjust < 0.05. (C) GO enrichment analysis; BP, CC and MF stand for biological process, cellular component and molecular function, respectively. The standard for screening GO terms with differences is Padjust < 0.05. (D) KEGG enrichment analysis. The standard for screening KEGG pathways with differences is Padjust < 0.05. (E) Verification of differential gene expression by qPCR. Cecal samples of 42-day-old broilers were collected for transcriptome analysis (n = 3). CON: broilers were raised in an environment of 21 ± 1°C. HS: broilers were raised in an environment of 33 ± 1°C (9:00-17:00) starting at 28 days of age, subjected to heat stress treatment for 8 h per day, and kept at an appropriate temperature (21 ± 1°C) for the rest of the time as in CON.
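To illustrate the type of Pearson correlation analysis linking metabolites, genes and microbial ASVs described in the Discussion and Conclusion above, a minimal Python sketch is shown below. It is not the study's actual pipeline; the small data frames, sample values and feature names are hypothetical placeholders used only to show the calculation.

```python
# Minimal sketch of a multi-omics Pearson correlation analysis.
# Assumes per-sample tables of metabolite abundances and gene/ASV abundances
# with matching sample order; all names and values below are illustrative only.
import pandas as pd
from scipy.stats import pearsonr

metabolites = pd.DataFrame({
    "ornithine": [1.2, 0.9, 1.1, 0.7, 0.8, 0.6],
    "PE":        [2.1, 1.8, 2.0, 1.4, 1.5, 1.3],
})
features = pd.DataFrame({
    "CALB1":   [5.1, 5.4, 5.0, 6.2, 6.0, 6.5],
    "SULT1C3": [3.3, 3.0, 3.2, 2.5, 2.6, 2.4],
})

results = []
for m in metabolites.columns:
    for f in features.columns:
        r, p = pearsonr(metabolites[m], features[f])  # Pearson r and two-sided p-value
        results.append({"metabolite": m, "feature": f, "r": r, "p": p})

print(pd.DataFrame(results))
```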
FIGURE 2 PCA and PLS-DA analysis of metabolomics. (A) Cation PCA plot. (B) Anion PCA plot. (C) Cation PLS-DA plot. (D) Cation PLS-DA permutation (displacement) test chart: the abscissa represents the permutation retention (the proportion consistent with the order of the Y variables of the original model; the point with a retention of 1 corresponds to the R2 and Q2 values of the original model), the ordinate represents the R2 (blue dots) and Q2 (red triangles) values of the permutation test, and the two dashed lines represent the regression lines of R2 and Q2, respectively. (E) Anion PLS-DA plot. (F) Anion PLS-DA permutation test chart. Cecal samples of 42-day-old broilers were collected for metabolomic analysis (n = 6). CON: broilers were raised in an environment of 21 ± 1°C. HS: broilers were raised in an environment of 33 ± 1°C (9:00-17:00) starting at 28 days of age, subjected to heat stress treatment for 8 h per day, and kept at an appropriate temperature (21 ± 1°C) for the rest of the time as in CON. FIGURE 3 Analysis of metabolomics results. (A) Volcano plot. The criteria for screening differential metabolites are VIP > 1 and FDR < 0.05. (B) HMDB classification chart. (C) Top 20 KEGG enrichment analysis. The standard for screening differential metabolic pathways is Padjust < 0.05. (D) Significantly different KEGG enrichment analysis. The standard for screening differential metabolic pathways is Padjust < 0.05. Cecal samples of 42-day-old broilers were collected for metabolomic analysis (n = 6). TABLE 1 Effects of heat stress on growth performance of broilers. 1 CON = broilers were raised in an environment of 21 ± 1°C; HS = broilers were raised in an environment of 33 ± 1°C, treated with heat stress for 8 h per day, and maintained at an appropriate temperature (21 ± 1°C) for the rest of the time. All measurements are expressed as mean ± SEM (n = 8). The p-value for 42-day BW was calculated by a mixed model, with replication as a random effect and group as a fixed factor, to evaluate the impact of grouping on body weight. The p-values for ADG, ADFI and FCR were calculated by independent-sample t-tests. All p-values were corrected using the Benjamini-Hochberg ("BH") method. TABLE 2 Sequencing data and alignment information. 1 CON = broilers were raised in an environment of 21 ± 1°C; HS = broilers were raised in an environment of 33 ± 1°C, treated with heat stress for 8 h per day, and maintained at an appropriate temperature (21 ± 1°C) for the rest of the time.
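The growth-performance statistics summarised in the Table 1 footnote (a mixed model for 42-day body weight with replicate as a random effect and group as a fixed factor, independent-sample t-tests for ADG, ADFI and FCR, and Benjamini-Hochberg correction) could be sketched along the following lines. This is an illustrative outline rather than the authors' code; the input file and column names are assumptions.

```python
# Schematic sketch of the growth-performance statistics described in Table 1.
# File and column names ("growth_performance.csv", "bw", "group", "replicate",
# "adg", "adfi", "fcr") are illustrative assumptions, not the study's own.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

df = pd.read_csv("growth_performance.csv")  # hypothetical per-bird/per-pen table

# Mixed model: group (CON vs HS) as fixed factor, replicate as random effect.
bw_model = smf.mixedlm("bw ~ group", data=df, groups=df["replicate"]).fit()
p_bw = bw_model.pvalues["group[T.HS]"]  # assumes CON is the reference level

# Independent-sample t-tests for the other traits.
pvals = [p_bw]
for trait in ["adg", "adfi", "fcr"]:
    con = df.loc[df["group"] == "CON", trait]
    hs = df.loc[df["group"] == "HS", trait]
    pvals.append(ttest_ind(con, hs).pvalue)

# Benjamini-Hochberg ("BH") correction across the four p-values.
_, p_adj, _, _ = multipletests(pvals, method="fdr_bh")
print(dict(zip(["bw", "adg", "adfi", "fcr"], p_adj)))
```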
v3-fos-license
2016-05-12T22:15:10.714Z
2014-12-22T00:00:00.000
14707988
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0115382&type=printable", "pdf_hash": "d039baadb4c23fc602a2ff437e09b3a23d466059", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:145", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "sha1": "d039baadb4c23fc602a2ff437e09b3a23d466059", "year": 2014 }
pes2o/s2orc
Confirmatory Factor Analysis of the WHO Violence Against Women Instrument in Pregnant Women: Results from the BRISA Prenatal Cohort Background Screening for violence during pregnancy is one of the strategies for the prevention of abuse against women. Since violence is difficult to measure, it is necessary to validate questionnaires that can provide a good measure of the phenomenon. The present study analyzed the psychometric properties of the World Health Organization Violence Against Women (WHO VAW) instrument for the measurement of violence against pregnant women. Methods Data from the Brazilian Ribeirão Preto and São Luís birth cohort studies (BRISA) were used. The sample consisted of 1,446 pregnant women from São Luís and 1,378 from Ribeirão Preto, interviewed in 2010 and 2011. Thirteen variables were selected from a self-applied questionnaire. Confirmatory factor analysis was used to investigate whether violence is a uni-or-multidimensional construct consisting of psychological, physical and sexual dimensions. The mean-and-variance-adjusted weighted least squares estimator was used. Models were fitted separately for each city and a third model combining data from the two settings was also tested. Models suggested from modification indices were tested to determine whether changes in the WHO VAW model would produce a better fit. Results The unidimensional model did not show good fit (Root mean square error of approximation [RMSEA]  = 0.060, p<0.001 for the combined model). The multidimensional WHO VAW model showed good fit (RMSEA = 0.036, p = 0.999 for the combined model) and standardized factor loadings higher than 0.70, except for the sexual dimension for SL (0.65). The models suggested by the modification indices with cross loadings measuring simultaneously physical and psychological violence showed a significantly better fit compared to the original WHO model (p<0.001 for the difference between the model chi-squares). Conclusions Violence is a multidimensional second-order construct consisting of psychological, physical and sexual dimensions. The WHO VAW model and the modified models are suitable for measuring violence against pregnant women. Introduction The expression ''violence against women'' involves complex, dynamic and historically determined phenomena. Abuse of women represents gender violence produced by unequal power relations, reflecting primacy of males over females [1][2][3]. Highly prevalent, violence against women has been pointed out as a phenomenon that is difficult to measure. There is variation on what types of acts are considered violent by a particular woman or a group of women. There is also various definitions and types of violence and/or methodological diversity across studies [4,5]. The use of different terminologies to express the various forms of abuse, types of study, places and periods of women's life when they are interviewed, sample size, screening instruments, perpetrators and modes of questionnaire application, among other aspects, have impaired the comparability of the results of different investigations regarding violence against women [6][7][8][9]. In order to minimize the methodological problems and to permit transcultural comparisons, the World Health Organization carried out the study Violence Against Women (WHO VAW) [2]. Thirteen questions were elaborated to investigate the psychological, physical and sexual types of violence and were included in the questionnaire WHO Multi-country Study on Women's Health and Life Events. 
More than 24,000 women aged 15 to 49 years were interviewed at 15 locations in 10 countries between 2000 and 2003, corresponding to random samples representative of the populations [10]. The WHO VAW instrument, composed of 13 questions measuring psychological (four), physical (six) and sexual (three) violence, showed good internal consistency, indicating that it provides a reliable and valid measure of these types of violence [10]. In Brazil, the WHO VAW questionnaire was validated using data from the city of São Paulo (1,172 women) and from 15 municipalities located in the Wooded Zone of Pernambuco (1,473 women). Exploratory factor analysis showed that this instrument is suitable for the estimation of gender violence perpetrated by an intimate partner, with high internal consistency and capacity to discriminate between emotional, physical and sexual violence within different social contexts [11]. Another study also used exploratory factor analysis to validate the WHO VAW questionnaire in a random sample of 573 Swedish women interviewed at the age of 18 to 65 years, and concluded that this screening instrument has good construct validity and internal consistency. The investigators pointed out the lack of studies at the international level with similar objectives, and only cited the Brazilian study of WHO VAW validation [12]. However, exploratory factor analysis analyzes the pattern of correlations between the variables investigated and uses these patterns to group them into factors. Model fit is not evaluated and it is not possible to test the hypothesis that a certain set of relationships between the observed variables and the proposed underlying construct exists. In turn, confirmatory factor analysis is a technique of multivariate statistical analysis that permits the investigator to analyze the pattern of correlations between the observed variables (or indicators) and to test hypotheses, in addition to proposing alternative models to the initial one [13,14]. Confirmatory factor analysis for the validation of the WHO VAW instrument has not been used yet and no studies were detected validating this instrument during the gestational period with the use of the self-applied questionnaire. Thus, the objective of the present study was to analyze the psychometric properties of the WHO VAW instrument in a sample of pregnant women in order to determine whether violence is a uni-or-multidimensional construct consisting of psychological, physical and sexual dimensions, using confirmatory factor analysis. Methods The present investigation is part of the Brazilian Ribeirão Preto and São Luís Birth Cohort Studies (BRISA in the Portuguese acronym), carried out by the Federal University of Maranhão (UFMA) and by the Faculty of Medicine of Ribeirão Preto, University of São Paulo (FMRP/USP) in two municipalities with contrasting socioeconomic indicators: São Luís (State of Maranhão) and Ribeirão Preto (State of São Paulo). The study is part of a large research project, aimed at evaluating risk factors for preterm birth in a convenience sample of pregnant women selected during the first 20 weeks of pregnancy. The municipalities of Ribeira˜o Preto and Sa˜o Luı´s The municipality of Ribeirão Preto is located in the state of São Paulo, in the Southeast region of Brazil. In 2010, its population was 604,682 inhabitants, with a 99.72% urbanization rate and a mean per capita family income of R$ 1,314.04 (approximately 728 American dollars) [15]. 
In this city, in 2011, 77.3% of all pregnant women attended at least seven prenatal care visits [16]. The municipality of São Luís, capital city of the state of Maranhão, is located in the Northeast region. In 2010, its population was 1,014,837 inhabitants, with a 94.5% urban population rate and a mean per capita family income of R$ 805.36 (approximately 446 American dollars) [17]. In this city, in 2011, 41.4% of the women giving birth to liveborn infants attended seven or more prenatal visits [18], a lower percentage than that observed for Ribeirão Preto. Participants and sample This was a convenience sample due to the impossibility of obtaining a random sample representative of the population of pregnant women in São Luís and Ribeirão Preto. Pregnant women users of prenatal outpatient clinics of public and private hospital maternities were registered for interview to be held from the 22nd to the 25th week of gestational age. Inclusion criteria were to have performed the first ultrasound exam at less than 20 weeks of gestational age and to intend to give birth at one of the maternities in the municipality where the prenatal interview was held. Multiple pregnancies were excluded. From February 2010 to June 2011, in São Luís, 1,447 pregnant women participated in the study at the Clinical Research Center (CEPEC in the Portuguese acronym) of the Federal University of Maranhão (UFMA in the Portuguese acronym). One woman was excluded because she did not fill the selfapplied questionnaire, leaving 1,446 cases for analysis. In Ribeirão Preto, the sample consisted of 1,400 pregnant women whose data had been collected from February 27, 2010 to February 12, 2011. Data for 1,378 women were used since 22 women did not have complete information about violence. Data collection and storage Two questionnaires were used for data collection: a) the Self-Applied Prenatal Questionnaire, to be read and answered by the pregnant women, and b) the Prenatal Interview Questionnaire, applied by interviewers. The 13 questions of the WHO VAW for the screening of violence against pregnant women were included in the self-applied questionnaire. If the pregnant women had doubts about filling out the questionnaire or had reading and writing difficulties, field supervisors helped them. Supervisors and coders reviewed the responses of the interviewees before being typed. Whenever possible, inconsistencies were corrected. Instrument for the screening of violence The questions for the screening of violence during pregnancy were obtained from the Brazilian version of the WHO VAW instrument [19]. For psychological (emotional) violence, women were asked: since you became pregnant has someone V1) insulted you or made you feel bad about yourself?; V2) belittled or humiliated you in front of others?; V3) intimidated or scarred you on purpose?; V4) threatened to hurt you or somebody you care about? [19]. Regarding physical violence during the actual pregnancy women responded to the following questions: since you became pregnant has someone V5) slapped you or thrown something at you that could hurt you?; V6) pushed or shoved you, hit you with a fist or something else that could hurt?; V7) hit you with his/her fist or with some other object that could have hurt you; V8) kicked, dragged or beaten you up?; V9) choked or burnt you on purpose?; V10) threatened you with, or actually used a gun, knife or other weapon against you? [19]. 
The last three questions dealt with sexual violence: since you became pregnant V11) has someone ever physically forced you to have sexual intercourse against your will?; V12) have you ever had sexual intercourse because you were afraid of what your partner might do?; V13) has someone ever forced you to do something sexual you found degrading or humiliating? [19]. The response options for each of these questions were the following: a) never (coded as zero), b) once (coded as 1), c) a few times (coded as 2), and d) many times (coded as 3) [19]. Raw data from the São Luís and Ribeirão Preto samples are available as S1 and S2 Files in excel format. Statistical analysis Based on the Brazilian version of the WHO VAW instrument [19], 13 observed variables were used (V1 to V13). Latent dimensions psychological violence (considering the four questions about emotional abuse), physical violence (considering the six questions about physical abuse) and sexual violence (considering the three questions about sexual abuse) were hypothesized. The unidimensional model of violence consisted of the 13 observed variables and the multidimensional model consisted of three latent dimensions: emotional, physical and sexual. Cronbach's alpha was calculated in Stata 13.0. Confirmatory factor analysis was performed using the Mplus software, version 7. Since all variables were categorical, the mean-and-variance-adjusted weighted least squares estimator was used. To determine whether the models showed good fit we considered: a) a p-value (p) larger than 0.05 for the Chi-squared test (x 2 ) [13]; b) a p-value of less than 0.05 and an upper limit of the 90% confidence interval of less than 0.08 for the Root Mean Square Error of Approximation (RMSEA) [14]; c) values higher than 0.95 for the Comparative Fit Index and the Tucker Lewis Index (CFI/TLI) [14]; and d) Weighted Root Mean Square Residual (WRMR) values of less than 1 [14]. The unidimensional model (Model 1) included the 13 observed variables (V1 to V13) forming the violence construct. The multidimensional (Model 2) followed the WHO VAW proposal. At the first level, the latent dimensions psychological violence, physical violence and sexual violence were analyzed based on their observed variables. At the second level, it was determined whether these three latent dimensions formed the violence construct. From this step onward, the modindices command was used for suggestions of modifications of the initial hypothesis. When the proposed modifications were considered to be plausible from a theoretical viewpoint, a new model was elaborated and analyzed. The difftest was used to calculate the difference between the chi-squared values of the models [14]. Models were fitted separately for each city and a third model combining data from the two settings was also tested. All women gave written informed consent to participate in the study and for those younger than 18 an accompanying adult also signed the consent form. All subjects were informed that the BRISA prenatal cohort was investigating risk factors for preterm birth, and that confidentiality, image protection and nonstigmatization were guaranteed to all participants. Results Cronbach's alpha was 0.76 for general violence, 0.76 for psychological, 0.76 for physical and 0.65 for sexual violence in São Luís. For Ribeirão Preto, it was 0.82 for general violence, 0.79 for psychological, 0.85 for physical and 0.82 for sexual violence. 
For the combined model including both cities, it was 0.80 for general violence, 0.78 for psychological, 0.82 for physical and 0.78 for sexual violence. In São Luís, the unidimensional model (Model 1) did not fit the data well by any of the indices adopted (RMSEA = 0.061, CFI = 0.933, TLI = 0.920, WRMR = 1.984). The multidimensional model following the WHO VAW (Model 2) showed good fit (RMSEA = 0.028, CFI = 0.986, TLI = 0.982, WRMR = 0.962) (Table 1). The standardized estimates of the factor loadings included in the three latent dimensions psychological, physical and sexual violence were all higher than 0.7 and statistically significant (all p < 0.001), and when these three dimensions formed the violence construct, the sexual violence construct presented a factor loading a little below 0.70 (0.65) (Table 2 and Fig. 1). The highest suggested modification index (25.502) for the WHO VAW model was to include V4 in the physical violence dimension. This modification was considered to be theoretically plausible, forming the multidimensional model 3 (Model 3). This modification resulted in a significant improvement compared to the WHO VAW model when considering the difference between chi-squares, p-value < 0.001 (Table 1). In this model, with V4 being simultaneously part of the psychological and physical violence dimensions, the V4 factor loading was 0.51 for the psychological dimension and 0.35 for the physical dimension (Table 2). The highest modification index (16.645) suggested for Model 3 was to include V3 in the physical violence dimension. We tested this modification because it was considered plausible (Model 4). The fit of this model was also superior to the WHO VAW model (p < 0.001) (Table 1). As a modification to this model, V5 was suggested to also load on psychological violence, a path that was not considered theoretically plausible. In Ribeirão Preto, the unidimensional model also did not show good fit (RMSEA = 0.052, WRMR = 1.807). The multidimensional WHO VAW was tested with the Ribeirão Preto data and showed a good fit (RMSEA = 0.035, CFI = 0.989, TLI = 0.987, WRMR = 0.992) (Table 3 and Fig. 2). The highest modification indices suggested for models 2 and 3 in Ribeirão Preto were the same as those suggested for São Luís. The modification index suggested for model 4 in Ribeirão Preto was a cross-loading of V6 on the sexual dimension, which was considered implausible based on theory. Models 3 and 4, suggested by the modification indices, showed significantly superior adjustment compared to the original WHO VAW model, all p-values < 0.001 (Table 4). Analysis of the Ribeirão Preto data revealed a negative value for the residual variance of the physical violence dimension, generating an improper solution (Heywood case). This Heywood case was probably provoked by data idiosyncrasy. For this reason, the variable V10 was excluded from the original WHO VAW model to test for consistency. The exclusion of V10 produced adequate estimates and the results were similar to those of models including V10 (Table 4, last column). For the combined model, the unidimensional model also did not show good fit (RMSEA = 0.060, WRMR = 2.570). The multidimensional WHO VAW was tested and showed a good fit (RMSEA = 0.036, CFI = 0.983, TLI = 0.979, WRMR = 1.312), although the WRMR was found to be a little above the suggested cut-off point (Table 5 and Fig. 3). The highest modification indices suggested for models 2 and 3 were the same as those suggested for São Luís and Ribeirão Preto.
Models 3 and 4, suggested by the modification indices, showed significantly superior adjustment compared to the original WHO VAW model, all p values ,0.001 (Table 6). Discussion The present study, by applying and validating the WHO VAW instrument in two Brazilian cities with confirmatory factor analysis, underscores the need to consider violence against pregnant women as a multidimensional phenomenon, with its psychological, physical and sexual sub-scales. The unidimensional models (combined, including both cities and separated models for each city) did not present good fit, showing that the 13 variables of the WHO VAW instrument did not form a single violence scale. These findings support the view that violence against pregnant women is a complex multidimensional phenomenon [2,3]. The multidimensional second-order models (combined, including both cities and separated models for each city) as proposed by the WHO VAW showed good adjustment, confirming what had already been shown in studies that used exploratory factor analysis [11,12]. The WHO VAW model was improved by the modification indices proposed, indicating that questions V3 (has someone ever intimidated or scarred you on purpose) and V4 (has someone ever threatened to hurt you or somebody you care about) seem to measure simultaneously the psychological and physical dimensions of violence. This was noted for the combined model including the two cities and also for the two separated models fitted for each city. Overlap of questions regarding psychological and physical violence had already been suggested by the first validation study of the WHO VAW questionnaire, with questions V5 and V6 (pushed or shoved you, hit you with a fist or something else that could hurt) of physical violence showing cross loadings with psychological violence only in one of the two sites studied, i.e., the Wooded Zone of Pernambuco [11]. Cross loadings of sexual violence and the other dimensions of psychological and physical violence were also observed in the Brazilian [11], but not in the present study. Differences in study samples may have possibly contributed to these findings since we only interviewed pregnant women, while the WHO Multicountry Study also included non-pregnant women. It is important to note that Cronbach's alpha for sexual violence was higher for the Ribeirão Preto sample than for the São Luís sample. We hypothesize that the way in which sexual abuse questions were asked in the WHO VAW questionnaire was closer to its cultural meaning or to the actual experience of this form of abuse in the Ribeirão Preto sample. The use of a convenience sample limits the external validity of the findings. It is unlikely that recall bias occurred in the responses to violence questions during the gestational period, since this period is short and the data were collected during the second trimester of pregnancy. A differential aspect of the present study was the validation of the WHO VAW instrument according to the multidimensional theory of violence for use in the self-applied form by pregnant women from different socioeconomic contexts, by means of confirmatory factor analysis. Thus, violence is a multidimensional second-order construct consisting of psychological, physical and sexual dimensions. The WHO VAW models and the modified ones showed good fit and are suitable to measure violence against pregnant women. 
Considering that the model proposed by the WHO is more parsimonious and also showed good fit, it could be preferentially used in the measurement of violence against women. The use of a validated questionnaire containing questions about psychological, physical and sexual violence can help the professionals who provide prenatal care to better screen for this phenomenon.
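As an illustration of the goodness-of-fit criteria listed in the Methods (chi-square p > 0.05; RMSEA with the upper limit of its 90% confidence interval below 0.08; CFI and TLI above 0.95; WRMR below 1), the short Python sketch below encodes the thresholds as a simple check. It is not part of the Mplus analysis; the RMSEA criterion is read here as applying to the point estimate (< 0.05) together with the confidence-interval limit, and the chi-square p-value and CI bound in the example call are placeholders.

```python
# Minimal sketch of the fit-criterion check described in the Methods.
# RMSEA, CFI, TLI and WRMR in the example call are the published combined-model
# values; chi2_p and rmsea_ci_upper are placeholders (not reported per line here).
def acceptable_fit(chi2_p, rmsea, rmsea_ci_upper, cfi, tli, wrmr):
    checks = {
        "chi2_p > 0.05": chi2_p > 0.05,
        "RMSEA < 0.05": rmsea < 0.05,
        "RMSEA 90% CI upper < 0.08": rmsea_ci_upper < 0.08,
        "CFI > 0.95": cfi > 0.95,
        "TLI > 0.95": tli > 0.95,
        "WRMR < 1": wrmr < 1,
    }
    return all(checks.values()), checks

ok, detail = acceptable_fit(chi2_p=0.20, rmsea=0.036, rmsea_ci_upper=0.04,
                            cfi=0.983, tli=0.979, wrmr=1.312)
print(ok)      # False here: WRMR slightly above the cut-off, as noted in the Results
print(detail)
```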
v3-fos-license
2022-09-13T06:17:29.052Z
2022-09-01T00:00:00.000
252199047
{ "extfieldsofstudy": [ "Computer Science", "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1010399&type=printable", "pdf_hash": "c7d9727feb93754b0c1876b28990e70eec4dcca1", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:146", "s2fieldsofstudy": [ "Biology" ], "sha1": "8ad4cca682e275a53096c08b1df44c29eab3c804", "year": 2022 }
pes2o/s2orc
Rhythmicity is linked to expression cost at the protein level but to expression precision at the mRNA level Many genes have nycthemeral rhythms of expression, i.e. a 24-hour periodic variation, at either the mRNA or protein level or both, and most rhythmic genes are tissue-specific. Here, we investigate and discuss the evolutionary origins of rhythms in gene expression. Our results suggest that rhythmicity of protein expression could have been favored by selection to minimize costs. Trends are consistent in bacteria, plants and animals, and are also supported by tissue-specific patterns in mouse. Unlike for the protein level, cost cannot explain rhythm at the RNA level. We suggest that instead it allows expression noise to be reduced periodically. Noise control had the strongest support in mouse, with limited evidence in other species. We have also found that genes under stronger purifying selection are rhythmically expressed at the mRNA level, and we propose that this is because they are noise-sensitive genes. Finally, the adaptive role of rhythmic expression is supported by rhythmic genes being highly expressed yet tissue-specific. This provides a good evolutionary explanation for the observation that nycthemeral rhythms are often tissue-specific. Detection of rhythmic gene expression We mainly used the algorithm GeneCycle to detect rhythmic patterns in gene expression time-series (see Methods for more details). We checked that the density distributions of p-values obtained from the rhythm detection methods used in this paper produced the expected left-skewed distributions (Figs 3 and 5). For each gene or protein with several data entries (several ProbIDs or transcripts), we combined p-values by Brown's method using the EmpiricalBrownsMethod R package. Fig 4 shows the density distribution of p-values obtained after this Brown's normalization for the transcriptomic data in Ostreococcus. 2 Gene expression level Figs 6 and 7 show that the distributions of the mean (Fig 6) and the maximum (Fig 7) expression level calculated over time-points (see Methods) can be considered normally distributed, and are therefore suitable for our statistical analyses, which require this condition. Gene expression costs To produce a given steady-state protein level N_p, the cell incurs energetic costs at the transcriptional level (C_RNA) and at the translational level (C_p): C_RNA = N_RNA × L_RNA × c_nt (equation 1) and C_p = N_p × L_p × c_AA (equation 2), where N_RNA is the abundance of transcripts, L_RNA the transcript length, c_nt the average nucleotide synthesis cost of the transcript, N_p the abundance of proteins, L_p the protein length, and c_AA the average amino-acid (AA) synthesis cost of the protein. The number of ATP molecules consumed is on average around 30 ~P for one amino-acid (AA) molecule produced (estimated in E. coli) (1), against 49 ~P for one single nucleotide (estimated in yeast and E. coli) (1; 2). The median length is estimated at around 1500 nucleotides for transcripts against 400 AA for proteins in yeast (1; 2). The abundance of transcripts is on average 1.2 molecules per cell (1), against 2,622 proteins per cell (yeast) (3) (9 and 6,384 molecules, respectively, found recently by Lahtvee et al. (4)). Thus, even though precursor synthesis per nucleotide is 1.6-fold costlier than per AA molecule and proteins are around 3.75-fold shorter than transcripts, N_p/N_RNA is on the order of 1,000, so the cost at the protein level is about 160 times that at the RNA level (even higher at slower cellular growth rates).
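The roughly 160-fold difference quoted above can be verified with a few lines of arithmetic using the same rounded figures (an abundance ratio on the order of 1,000, proteins about 3.75-fold shorter than transcripts, and nucleotide precursors about 1.6-fold costlier than amino acids). The short sketch below is only a numerical check of equations 1 and 2, not part of the original analysis.

```python
# Back-of-the-envelope check of the ~160-fold protein/RNA cost ratio,
# using the rounded numbers quoted in the text (yeast / E. coli estimates).
cost_per_aa = 30.0        # ~P per amino acid
cost_per_nt = 49.0        # ~P per nucleotide
protein_len = 400.0       # median protein length (AA)
transcript_len = 1500.0   # median transcript length (nt)
abundance_ratio = 1000.0  # N_p / N_RNA, "on the order of 1,000"

# C_p / C_RNA = (N_p * L_p * c_AA) / (N_RNA * L_RNA * c_nt)
cost_ratio = abundance_ratio * (protein_len / transcript_len) * (cost_per_aa / cost_per_nt)
print(round(cost_ratio))  # ~163, i.e. protein-level costs ~160x the RNA-level costs
```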
Finally, protein costs are even higher since chain elongation costs are larger than those of nucleotide polymerization, because the nucleotide precursors are already activated molecules (rNTP) whereas chain elongation requires the cost of charging tRNAs with AA (2). Thus, because C_RNA << C_p, C_p gives an estimate broadly representative of the expression costs per gene. Finally, since we compared these costs between genes, the other costs, such as degradation or chain elongation costs, should not change our results. Degradation may cost essentially nothing (1) and its cost must be correlated with C_p, since longer or more highly expressed proteins require more chain elongation activity and a higher degradation rate (see next section), so it does not modify the comparison of expression costs between genes. Our results support the hypothesis also claimed by Wang et al. (5) that cycling expression of the more expensive genes is a conserved strategy for minimizing overall cellular energy usage. In this study, we provide new results based on relevant data. Indeed, the data used by Wang et al. (5) for the calculation of costs appear to be biased, partly because: i. translation rates come from fibroblast cells (6); and ii. there were errors in the estimation of protein levels resulting in a systematic underestimation of protein levels and derived translation rate constants (cf. Corrigendum (6)). 4 Expression costs, decay and half-lives Costs of protein decay Protein half-life is the time required to reduce the protein amount by 50%. In terms of protein concentration, assuming constant protein production, protein half-life depends on protein degradation and cell dilution (cell growth), with the total decay rate δ = δ_sdeg + δ_deg + δ_dil ≈ δ_deg + δ_dil (equation 3) and t_1/2 = ln(2)/δ (equation 4), where δ_sdeg is the spontaneous decay rate of unstable molecules, δ_deg the active protein degradation rate, and δ_dil the cellular dilution rate; we assume that spontaneous degradation of proteins is negligible. Costs of protein decay are negligible enough not to be opposed by selection. Indeed, Lynch and Marinov (2015) and Wagner (2005) have shown that "degradation in a lysosome may cost essentially nothing, and amino-acid export back to the cytoplasm consumes ∼1 ATP for every 3 to 4 amino acids". Compared with the cost of producing one single nucleotide, which consumes 49 ~P, protein decay costs become negligible compared to transcriptional costs, which are themselves negligible compared to translational costs. All the more so given that amino acids released by degradation are reused and do not need to be produced by the cell, which therefore saves around 30 ~P per amino acid. Half-lives of rhythmic proteins In rapidly growing bacterial cells, dilution is often more significant than degradation (7; 8), whereas in non-dividing and slowly dividing cells, such as differentiated mammalian cells, protein half-life relies mostly on degradation because dilution is negligible (7). Half-life might be an inappropriate parameter regarding rhythmic proteins. Indeed, half-life depends on the protein production rate and degradation rate which, for rhythmic proteins, vary over time. We think that protein half-life should be a constant property of proteins and is, therefore, difficult to interpret for rhythmically expressed genes (although it is discussed by Lück et al. (9)). Suppose, nevertheless, that we can treat half-life as a property of all proteins.
Since higher expressed genes have longer halflives (they are more stable) (10; 11) and because highly expressed genes tend to be rhythmic, the expression costs for maintaining high level of these proteins should be reduced (due to their longer life-span). That is why, the cost per time-unit of maintaining the steady-state protein level might not be correlated with our estimation of expression cost (C p ). I.e., rhythmic proteins for which our results suggest that they are rhythmic because they are costlier (and costlier because they are highly expressed) would, in fact, require less energetic costs to be expressed than expected because these protein molecules tend to have longer half-lives (they live longer). On the other hand, models suggest that a longer half-life damps the amplitude of the periodic expression (9); as supported by observations showing that non-rhythmic proteins have much longer half-lives than rhythmic ones (in mouse fibroblast) (11). Thus, highly expressed genes should be less rhythmic (due to amplitudes damped by longer half-lives) and therefore, less energy saved over 24 hours. Consequently, the rhythmic expression of costly proteins (as our results suggest) should not involve proteins with longer half-lives. Thus, rhythmic and highly expressed proteins remain costlier for the cell per time-unit than rhythmic and lower expressed proteins. Therefore, our results remain consistent even when considering protein half-lives. Averaged AA synthesis cost and protein length Before applying the t-tests, we checked that averaged AA synthesis costs and the length of proteins were normally distributed in both groups (rhythmic, first 15%, and non-rhythmic) (Figs 8 and 10). We also verified that quantile-quantile-plots showed comparable distributions between the theoretical distribution and the empirical distribution for both groups (Figs 9 and 11). 6 Gene expression noise 6.1 Transcriptional noise is the main source of the overall noise Relatively to translational noise, transcriptional noise is the main driver of the overall noise (12) and should give a good estimation of the output noise. Indeed, based on estimations of coefficient of variations (CV, cell-to-cell variations of protein level) for diverse transcription and translation rates in E. coli and S. cerevisiae, Hausser et al. (13) have shown that for a fixed transcriptional rate, CV is almost constant for diverse translational rates. Thus, changes in protein level have little to no impact on gene expression noise. The availability of mRNA molecules seems to drive the final noise. I.e., comparatively to the noise caused by the translational activity, the availability of low number molecules such as transcriptional factors (subject to the stochasticity of diffusion and binding in the cell environment) is the main factor of the output cell-to-cell variation in protein abundances. Rhythmic proteins and expression noise In the case of genes with constant mRNAs abundances and with rhythmic proteins, we expect that rhythmic expression at the protein level cannot originate from selection on expression noise. Indeed, since increasing transcription at constant protein abundance decreases expression noise (13; 14) ( Fig 1A), we should theoretically expect to improve expression precision (decrease noise) when protein level decreases for an equi-mRNA level, because the translational efficiency decreases (i.e. less proteins are produced per mRNA molecule). 
Thus, in this case, we expect gene expression precision to be highest when the protein level is at its valley and lowest when the protein level is at its peak, which we presume to be near the optimal protein level (Fig 1B: theoretical plot). Based on estimations of the coefficient of variation (CV, cell-to-cell variation of protein level) for diverse transcription and translation rates in E. coli and S. cerevisiae, Hausser et al. (13) have shown that for a fixed transcriptional rate, CV is almost constant for diverse translational rates (13). Thus, the translational efficiency might not be the main driver of expression noise (Fig 1B: last plot). The availability of mRNA molecules drives the final noise, i.e. mainly the availability of transcription factors. Thus, this suggests that, in both cases, rhythmicity at the protein level cannot be due to selection on noise. Finally, we remain attentive to the fact that all these considerations presume an immediacy between expression noise and biological functionality. Figure 1 legend: (A) The expression noise decreases with increasing transcription for an equi-protein level (13). In the case of rhythmic mRNA abundance and constant protein level, the expression noise decreases when the mRNA level reaches its peak. (B) Theoretical expression noise variation in the case of constant mRNA abundance and rhythmic proteins. Theoretically, the expression noise is correlated with the translational efficiency. However, observations in E. coli and S. cerevisiae show that for a fixed transcriptional rate (which we assume to be correlated with mRNA level), the expression noise (CV) is almost constant for diverse translational rates (13). Noise level is based on a single timepoint scRNA dataset Indeed, our noise estimation is reported at a single (unknown) timepoint for a cell population (scRNA). Since the peak times of rhythmic genes are broadly distributed (Fig 2a), we expect the mean noise of a given time-point to be representative of all time-points (Fig 2b). Noise estimation We compared several noise estimation methods and found that the F* polynomial-degree method of Barroso et al. (15) was the best method across all datasets and especially the most efficient method in controlling for the effect of mean expression (S7 Table). To do this, we calculated the slope of the linear model which best fit the correlation between the noise and the mean expression (Figs 12 and 13), as well as the R2 and the Kendall correlation (τ), applied to normally distributed noise estimations. The linear model that explained the variance the least well (small R2) was considered to be approximately the best expected value. Indeed, we expect a good noise estimation model to be independent of the mean gene expression (linear regression slope and Kendall's τ near 0) and to have a large range of noise values for every mean expression (small R2). The F* polynomial degree of Barroso et al. (15) was the best method because it removed the correlation between mean expression and F* (smallest linear regression slope and Kendall's τ) and kept a large range of noise values for every mean expression (smallest R2). If no polynomial degree gave a non-significant Kendall correlation (i.e., all Kendall's p-values < 0.05), we used the polynomial degree which seemed to maximally remove residual correlations (smallest Kendall's τ and smallest linear regression slope).
Thus, we used F*: degree 3 for mouse liver (Kendall's τ = -0.0288, p-value = 1.6e-07, linear regression slope = 1.14e-13); degree 3 for mouse lung (Kendall's τ = -0.0545, p-value < 2.2e-16, linear regression slope = 3.40e-15); degree 3 for mouse muscle (Kendall's τ = -0.0795, p-value < 2.2e-16, linear regression slope = 9.76e-15); degree 4 for mouse heart (Kendall's τ = -0.0445, p-value < 2.2e-16, linear regression slope = -3.75e-13); degree 4 for mouse aorta (Kendall's τ = -0.0183, p-value = 4.70e-04, linear regression slope = 4.83e-14); degree 4 for mouse kidney (Kendall's τ = -0.0528, p-value < 2.2e-16, linear regression slope = 1.04e-13); and degree 4 for Arabidopsis roots (Kendall's τ = 0.0091, p-value = 0.1162, linear regression slope = -9.78e-15). Figure 13 (axes: mean expression vs. SGE measure): a-f) Relationships between different stochastic gene expression (SGE) estimations and the mean gene expression, using a discretised display with error bars that represent the middle 95% of values. The 3 horizontal lines separate the values into 4 quartiles. The dots represent the median of the values. All parameters are log transformed. F* is the noise estimated by the method of Barroso et al. (15). F*min is the first polynomial degree which breaks the correlation between the noise and mean expression, and F*max is the next one.
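The selection logic described above for choosing the F* polynomial degree (a regression slope and Kendall's τ close to zero, ideally non-significant, and a small R² between noise and mean expression) can be summarised in a short sketch. The arrays below are random placeholders; the snippet only illustrates how such criteria could be computed, not the actual analysis.

```python
# Sketch of the noise-vs-mean decoupling check used to pick the F* degree:
# for each candidate noise estimate, compute the linear-regression slope, R^2
# and Kendall's tau against mean expression; prefer estimates with slope and
# tau closest to 0 (and a non-significant tau) and with the smallest R^2.
import numpy as np
from scipy.stats import kendalltau, linregress

rng = np.random.default_rng(0)
mean_expr = rng.lognormal(mean=2.0, sigma=1.0, size=5000)            # placeholder
candidate_noise = {"F*_deg3": rng.normal(size=5000),                 # placeholder
                   "F*_deg4": 0.05 * np.log(mean_expr) + rng.normal(size=5000)}

for name, noise in candidate_noise.items():
    slope, intercept, r, p_lin, stderr = linregress(np.log(mean_expr), noise)
    tau, p_tau = kendalltau(np.log(mean_expr), noise)
    print(f"{name}: slope={slope:.3g} R2={r**2:.3g} tau={tau:.3g} (p={p_tau:.3g})")
```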
v3-fos-license
2021-11-17T16:32:53.223Z
2021-11-01T00:00:00.000
244144283
{ "extfieldsofstudy": [], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2072-6694/13/22/5681/pdf", "pdf_hash": "1cc1b93354f727317cf549c49721da0c6d17abf6", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:147", "s2fieldsofstudy": [ "Biology" ], "sha1": "38d5d5333b18ab007e12d0366628f69b1ef45e9c", "year": 2021 }
pes2o/s2orc
Biological Significance and Targeting of the FGFR Axis in Cancer Simple Summary All cells within tissues and organ systems must communicate with each other to ensure they function in a coordinated manner. One form of communication is signalling mediated by small proteins (for example fibroblast growth factors; FGFs) that are secreted by one cell and bind to specialised receptors (for example FGF receptors) on nearby cells. These receptors propagate the signal to the nucleus of the receiving cell, which in turn dictates to the cell how it should react. FGFR signalling is versatile, tightly controlled and important for normal body homeostasis, facilitating growth, healing and replacing old cells. However, cancer cells can take command of this pathway and use it to their advantage. This review will first explain the biology of FGFR signalling and then describe how it can be corrupted, the implications in cancer, and how it can be targeted to improve cancer therapy. Abstract The pleiotropic effects of fibroblast growth factors (FGFs), the widespread expression of all seven signalling FGF receptors (FGFRs) throughout the body, and the dramatic phenotypes shown by many FGF/R knockout mice, highlight the diversity, complexity and functional importance of FGFR signalling. The FGF/R axis is critical during normal tissue development, homeostasis and repair. Therefore, it is not surprising that substantial evidence also pinpoints the involvement of aberrant FGFR signalling in disease, including tumourigenesis. FGFR aberrations in cancer include mutations, gene fusions, and amplifications as well as corrupted autocrine/paracrine loops. Indeed, many clinical trials on cancer are focusing on targeting the FGF/FGFR axis, using selective FGFR inhibitors, nonselective FGFR tyrosine kinase inhibitors, ligand traps, and monoclonal antibodies and some have already been approved for the treatment of cancer patients. The heterogeneous tumour microenvironment and complexity of FGFR signalling may be some of the factors responsible for the resistance or poor response to therapy with FGFR axis-directed therapeutic agents. In the present review we will focus on the structure and function of FGF(R)s, their common irregularities in cancer and the therapeutic value of targeting their function in cancer. Introduction Cancer is a disease of cells, starting with genetic alterations in one cell or a small group of cells. If the repair machinery of the cells fails, then accumulation of genetic alterations will lead to cancer and with time to metastasis. In order for cells to become cancerous, they need to adopt behavioural changes outlined as the "hallmarks of cancer" [1]. Of course, besides the classical hallmarks of cancers, many years of research from different angles has shed light onto novel emerging hallmarks of cancer, such as an altered microbiome, neuronal signalling, epigenetic dysregulation and transdifferentiation [2]. There are many targeted therapies that inhibit and block each of the developed competencies necessary for the growth and progression of tumour development. A number of these approaches target tyrosine kinase receptors, such as the human epidermal growth factor receptor 2 (HER2), epidermal growth factor receptor (EGFR), vascular endothelial growth factor receptor 2 (VEGFR2), platelet derived growth factor receptor (PDGFR) and FGFR, in various ways. 
Acknowledging that many other tyrosine kinase receptors merit therapeutic targeting, this review will focus on and further discuss the importance of, and the different ways of, targeting the FGF/FGFR axis. Fibroblast growth factor receptor (FGFR) signalling plays a pivotal role in a myriad of processes including embryonic development, cell differentiation, proliferation, wound healing, cell migration, angiogenesis and various endocrine signalling pathways [3]. Dysregulation of FGFR signalling can lead to an antiapoptotic, mutagenic and angiogenic response in cells, all of which are cancer hallmarks [4]. The oncogenic potential of FGFR signalling also lies in its potential to serve as an escape mechanism for acquired resistance to cancer therapy. To appreciate the therapeutic value of targeting FGFR signalling in cancer, we will first consider normal structure and function, then discuss how aberrant FGFR signalling can influence cancer progression and, finally, describe how it can be targeted. FGF(R) Structure In humans, the fibroblast growth factor (FGF) family comprises 22 members [5][6][7]. Each FGF ligand comprises a conserved core region of 120 amino acids and shares between 35% and 50% sequence homology [6]. Despite being similar in structure, only eighteen FGFs are reported to signal via FGFR, namely FGF1 to FGF10 and FGF16 to FGF23 [8]. Other FGF ligands, such as FGF11 to FGF14, which also share a similar structure to other ligands, do not bind to these receptors but instead can function via voltage-gated sodium channels [9], although recent work casts doubt on this dogma [10]. Five FGF subfamilies (FGF1, -4, -7, -8, -9) are characterised as paracrine signalling molecules that signal by forming a three-way complex with FGFR and heparan sulphate proteoglycans (HSPGs). The other two subfamilies (FGF11 and FGF19) act differently; the FGF11 subfamily acts intracellularly, while the FGF19 subfamily has a reduced HSPG binding affinity and binds to αKlotho and βKlotho cofactors to function in an endocrine manner, impacting adult homeostasis and metabolism [8,11]. FGFRs have extracellular immunoglobulin-like (Ig) domains 1-3 (D1-D3), a transmembrane (TM) domain, tyrosine kinase I and II domains, a carboxyl-terminal tail, and an acidic box [12] (Figure 1). The D2 and D3 regions form a ligand-binding pocket for two FGF ligands and two heparin molecules [13]. The acidic box is responsible for the autoinhibition and regulation of optimal interactions with bivalent cations (Figure 1). The interaction between the acidic box and the heparan sulphate-binding site inhibits activation of the receptor when FGF is absent [8,[14][15][16]. FGF binds in the Ig2 and Ig3 domains, where HSPGs protect FGFs from protease-mediated degradation, thus stabilising the FGF-FGFR complex (Figures 1 and 2A) [17]. Hence, high-affinity FGFRs are activated upon FGF ligand binding. Paracrine FGFs bind strongly to HSPGs, which possess cofactor functions to prevent the FGFs from diffusing through the extracellular matrix (ECM) as well as regulating the FGFR specificity [11,18]. Figure 1. FGFRs comprise extracellular Ig-like domains with an acidic box, a transmembrane domain, and an intracellular tyrosine kinase domain that is split into two (TK1 and TK2). The FGF-FGFR complex is composed of two FGFs, two FGFRs and one heparan sulphate proteoglycan (HSPG). The TK domains are transphosphorylated upon ligand binding between Ig2-Ig3 and receptor dimerisation.
This initiates the interaction between a network of downstream signalling molecules that can activate key pathways, such as MAPK, AKT, PLCγ, STAT1 and in turn regulate target genes involved in cell proliferation, migration, differentiation, survival, resistance to anticancer agents and neoangiogenesis. Signalling can be negatively regulated by SEF, FGFR-like 1 (FGFRL1), sprouty (SPRY) and MAPK phosphatase 1 and 3 (MKP1 and MKP3) at different levels. Created with BioRender.com (accessed on 26 September 2021). Aside from the four main FGFR family members (FGFR1-4), there is an additional receptor, fibroblast growth factor receptor like 1 (FGFRL1 or FGFR5), that can bind to FGFs and heparin, but lacks the tyrosine kinase domain and therefore cannot signal via transphosphorylation [19]. FGFR1L is believed to negatively regulate FGFR signalling by acting as a decoy receptor that neutralises FGFs by binding to them without activating any downstream signalling cascade [20]. FGFRL1 is expressed mainly in musculoskeletal tissues and the kidney and its main function is to control the growth of the metanephric kidney [21]. It is hypothesised that its function depends on Ig2 and Ig3 domains interacting together with an FGF ligand and another molecule from the surface of other cells from their microenvironment [21]. In fact binding of FGF8 to FGFRL1 plays an important role in developing kidneys by driving the formation of nephrons [22]. FGFR Splicing Despite the high sequence homology between FGFR family members (55%-72%) and their similar structural characteristics, there are a variety of isoforms ( Figure 2) [23]. FGFR diversity is not only attributed to the different genes that can encode FGFR1-4 and the multiple FGFs that can activate them, but also to the fact that FGFR genes can be alternatively spliced ( Figure 2). (B) Splice variant at the Ig3 loop occurs in FGFR1-3. This splice variant is responsible for ligand binding specificity and is generated by alternative splicing at the Ig3. The first portion of the Ig3 is the exon "a" (exon 7) that is spliced to either exon "b" (exon 8) or exon "c" (exon 9) and then to the exon that encodes the TM domain. The "b" isoform is mainly expressed by epithelial tissues/cells, whereas the "c" isoform is expressed by mesenchymal tissues. FGFs have differential specificity to different isoforms. (C) Splice variants can generate soluble variants without TK activity, truncated to one or more Ig domains and missing the TM domain. Variants lacking the TK domain can heterodimerise with full length FGFRs to generate nonfunctional dimers and therefore act as down regulators [24]. (D) FGFR4 can generate a single isoform containing the "c" exon (exon 9) in the Ig3 domain. (E) FGFR2 can generate a splice variant missing Ig1 and Ig3 containing the "b" exon (exon 8). (F) FGFR1 and FGFR2 can also generate a splice variant with truncated Ig1 and Ig3 containing the "c" exon (exon 9). SP: signal peptide, Ig: Immunoglobulin, AB: acid box; TM: transmembrane domain, UTR: untranslated region. Created with BioRender.com (accessed on 26 September 2021). FGFR genes consists of 18 exons (Figure 2A). Each gene can be alternatively spliced and produce different mRNAs that consequently will result in FGFR protein diversity [25,26]. FGFRs 1-3 each can generate two splice variants of the immunoglobulin (Ig)-like domain 3b and 3c, which are fundamental to ligand-binding specificity during development and are often mis-spliced in cancer [5,8,11]. 
Hence, there are seven main signalling FGFRs, FGFR1b, FGFR1c, FGFR2b, FGFR2c, FGFR3b, FGFR3c and FGFR4, encoded by four genes. Each specific FGFR binds to specific FGFs and most FGF ligands can bind to several different variants of FGFRs. The FGF binding specificity to FGFRs is regulated by two distinct splice variants of exon 8 and exon 9 of domain 3 (D3) [6] (Figure 2A,B). The splicing variant of exon 7/8 and exon 7/9 encodes the carboxyl-terminal of the domain D3, resulting in the -b or -c isoform [27][28][29][30][31]. In human tissues, the -b isoforms are confined to epithelial cells, with the -c isoform predominating in mesenchymal lineages [6]. The specificity of the ligand binding to FGFRs differs amongst isoforms -b and -c ( Figure 2B) [6]. For instance, FGF4 binds to the FGFR1-3c isoform, while FGF7 binds specifically to the FGFR1b and -2b isoforms ( Figure 2B). In conclusion, the exon rearrangement at the Ig3 loop (D3) has a profound effect on the FGF spectrum for each receptor, with the FGFR1-3b isoforms having a more limited binding affinity with FGFs compared to the FGFR1-3c isoforms ( Figure 2B) [6]. Interestingly, it has been reported that reversible switching of the FGFR2-3b isoform to the -c isoform was induced by exogenous and endogenous FGF1 and FGF2. This switch was confluence and cell cycle dependent [32]. Altered splicing has been associated with cancer progression [33-35], for example during EMT [33,36]. FGFR1 and -2 also have another isoform -a, in which exon 7 joins directly with exon 10, the TM domain. This truncated variant is a secreted protein that is incapable of signal transduction and has an autoinhibitory role [37]. In bladder cancer, the switch from the FGFR1a to the FGFR1b isoform increased FGF1-induced activation of the latter and was associated with the tumour grade and stage, likely due to it giving a proliferation advantage [38]. In addition, there are secreted FGFR isoforms that lack the TM domain and the entire cytoplasmic region [23,39,40] ( Figure 2C). There are also reports of truncated FGFR isoforms lacking Ig1 [39-42] ( Figure 2E). The truncated Ig1 isoform is known to be a high affinity oncogenic variant that can activate various downstream signalling pathways, due to the Ig1 region performing an autoinhibitory role [6,24,43,44]. Interestingly, there are also isoforms missing the acid box between the Ig1 and Ig2 loops (i.e., truncated Ig1 FGFR2b and FGFR3c) [45] and other isoforms missing the carboxyl terminal [13]. In fact, such a variant missing the inhibitory carboxyl terminal portion of FGFR2 was expressed in a breast cancer cell line (SUM-52PE), along with other splice variants, with the different splice variants having different transforming activities [43]. Variants expressing the C3 carboxyl terminus resulted in more autonomous signalling, cell growth, and invasion [43]. Recently, a novel FGFR3 splice variant was reported in African American prostate cancer (FGFR3-S) that was associated with a poor prognosis and increased cell proliferation and motility [44]. The FGFR3-S variant lacked exon 14, comprising 123 nucleotides that encode the activation loop in the split kinase domain [44]. FGFR4 is well defined as it is only produced in a single isoform homologous to the FGFR1-3c splice variant [46] ( Figure 2D). Although their properties are not well understood at present, there are also reports of FGFRL1 isoforms with an absent Ig1 domain with and without the acid box [47]. 
As discussed, the specificity of FGF-FGFR binding is determined by alternative splicing, ligand specificity and tissue specific expression of both FGF ligands and receptors [5,6]. Further control of the FGF-FGFR coupling is provided by the interaction with secreted proteins and plasma membrane bound receptors, such as α and β Klotho proteins [5,51,52], and a single-pass transmembrane Klotho-related protein (KLPH). These act as cofactors for the endocrine FGFs by forming an FGF-FGFR-Klotho ternary complex [53,54]. Dimerisation of the FGFR causes a conformational shift in the receptor's structure, leading to a 50-to 100-fold increase in the receptor kinase activity, resulting in the phosphorylation through mutual transphosphorylation of numerous tyrosine residues in the intracellular domain. Subsequently, various protein complexes are formed to initiate downstream signalling transduction [4,9,11,12,55,56]. One of the adaptor proteins that governs the downstream signalling cascade is the v-crk sarcoma virus CT10 oncogene homolog (Crk) (Figure 1) [48]. Upon FGFR phosphorylation, Crk gets transiently phosphorylated and can be associated with son of sevenless (SOS), that in turns activates small GTPases [49,51,52,57]. With the inherent complexity of the modes of activation, transduction and biological output, it is not surprising that the orchestration of FGFR signalling is tightly regulated. We previously discussed the FGFR autoregulation via the acid box in Ig1 and its association with ligand binding Ig2 and Ig3 domains ( Figure 1). However, there are several mediators associated with controlling the signalling output from activated FGFRs. Some of the known negative regulators are sprouty proteins that are induced by FGF signalling (Figure 1) [72,73]. Furthermore, FRS2 can be phosphorylated by MAPK on serine and threonine residues, inhibiting GRB2 recruitment and producing a negative feedback loop [74,75]. Other negative modulators of the FGF signalling pathway are the transmembrane proteins, similar expression to FGF genes (SEF), anosmin-1, fibronectin-leucine-rich transmembrane protein 3 (FLRT3), FGFRL1 and MAPK phosphatases (MKP) that can also interfere with the activation of downstream signalling pathways [76][77][78][79] (Figure 1). In addition to the above, the stimulated FGF-FGFR complex can be completely blocked by internalisation and subsequent lysosomal degradation. The E3 ubiquitin ligase Cbl binds to activated FRS2 and facilitates FGFR ubiquitination by acting as a signal for receptor degradation [80]. FGFR signalling clearly has profound direct effects on cancer cells. However, the FGFR axis also impacts on angiogenesis and this is an emerging field of translational medicine [81]. FGF2 has been heavily implicated as a proangiogenic factor, promoting endothelial proliferation and migration following FGFR1/2 signalling and VEGF/angiopoietin 2 secretion [82], and has been shown to mediate resistance to VEGFR targeted therapy in cancer [83]. In addition, other FGFs, such as FGF5 and FGF18, can promote angiogenesis through endothelial FGFR activation [84,85]. Interestingly, a recent study revealed an association of an FGFR1 mutation with spontaneous haemorrhage in paediatric and young adult low grade glioma, though the specific mechanism remains unclear [86]. 
In urothelial carcinomas, FGFR3 was able to induce a proangiogenic phenotype, suggesting that constitutive activation of FGFR3 may be able to potentiate growth factor signalling in the tumour microenvironment and implicating FGFR3 as a potential therapeutic target from an antiangiogenic perspective [87]. As with other behaviours, the effects of FGFR signalling can be context specific. In an embryoid body model, FGFR1 negatively regulated angiogenesis by altering the balance of cytokines, such as interleukin-4 and pleiotrophin [88]. Examples of the Involvement of FGFR Signalling in Development Before discussing how FGFR signalling can drive cancer, it is important to understand how it is involved in development and why such a pleiotropic and dynamic pathway can be key in disease development. FGFR signalling plays a fundamental role in cell proliferation and migration. However, during embryonic development, FGF signalling regulates differentiation and the cell cycle. FGF signalling is important as early as in the preimplantation of embryos in mammals. For example, FGF4 is expressed in the morula and then in epiblast cells of the inner cell mass [89] and facilitates cell proliferation and the formation of the ectoderm [90,91]. There are reports of FGFR1 and FGFR2 in the inner cell mass and FGFR2 also in the embryonic ectoderm [92]. Later in development it has a vital role in organogenesis, particularly regulating the reciprocal crosstalk between epithelial and mesenchymal cells [93,94]. For example, FGFR2 plays an important function in both the ectoderm and mesenchyme during limb development [7]. More broadly, mesenchymal cells express FGFs, such as FGF4, 7, and 10, leading to downstream signalling activation through the epithelial 3b splice variant of FGFR1 and -2 in the epithelium and as a result, facilitate lung, salivary gland, intestine and limb development [95][96][97]. In contrast, epithelial tissue can secret FGFs 8 and 9 that activate FGFR1 and FGFR2-3c isoforms in the mesenchymal tissue [98,99]. However, organogenesis is not always driven exclusively via paracrine loops [11]. During development of the central nervous system, FGF8 signals in the anterior neural primordium by acting as an autocrine/paracrine factor in the development of the inner ear [100]. The differentiation of the cochlear sensory epithelium is regulated by autocrine/paracrine FGF signalling [101]. More recently it was found that FGFR can interact with N-Cadherin and activated FGFR that in turn facilitates migration of neocortical projection neurons [102]. FGFRs could regulate multipolar neuronal orientation and change them into bipolar cells to enter the cortical plate [102]. Given the importance of FGF signalling to development, it is unsurprising that malfunction can lead to developmental defects. The absence of FGFR1 in genetically modified mice leads to early growth defects [103]. Activated FGFR germline mutations can lead to skeletal disorders, such as a mutation in FGFR3 which can lead to growth defects and human dwarfism achondroplasia (ACH) [104,105]. A variety of inherited syndromes are caused by germline irregularities in FGFR [106]. Furthermore, mutations, especially in FGFR2, can lead to craniofacial malformation syndromes [107]. Aberrant FGFR Signalling in Cancer The pleiotropic function of FGFR and its involvement in crucial physiological processes makes the FGFR signalling pathway a good candidate for facilitating cancer progression. 
In this section we will highlight the different ways FGFR signalling can be involved in the pathogenesis of cancer. One way of facilitating malignant progression via FGFR signalling is via a corrupted autocrine/paracrine loop and exon switching. Dysregulation of FGF secretion and FGFR expression in stromal and cancer cells can be a driving force in cancer progression. Elevated levels of many FGFs are associated with cancer progression, for example FGF1, -2, -6, -8, -10, -19 and -23 [108][109][110][111][112][113][114][115][116][117][118]. Interestingly, FGFs are implicated in EMT in cancer by conferring mesenchymal characteristics on epithelial cells [119][120][121]. In some cases, growth factors (e.g., FGFs) are produced and secreted by one type of cell (for example stromal cells) and stimulate, via paracrine signalling, another cell type to carry out functions such as proliferation, differentiation and migration [1]. However, cancer cells can synthesise FGFs and create a positive feedback loop via autocrine signalling. For example, in breast and non-small cell lung carcinomas, FGF2 and FGF9 are expressed and activate their respective FGFRs in the same cells [122,123]. Furthermore, FGF10 has been implicated as a key paracrine signal in breast, pancreatic, stomach, skin, lung and prostate cancer [108,124]. The specificity of FGF ligands can be altered through isoform switching and alternative splicing of FGFRs, thereby increasing the range of FGFs that can stimulate cancer cells, depending on the FGFR isoforms they express [8,35]. For example, alternative splicing of FGFR1 is associated with a high tumour grade and stage in bladder cancer [38]. Similarly, alternative FGFR1α/FGFR1β splicing was found to play a key role in breast cancer [34] and FGFR3 splicing promoted aggressiveness in prostate cancer [125]. Deregulation of negative regulators of the FGFR axis can also contribute to aberrant FGFR signalling in cancer. For example, SEF and SPRY expression levels are associated with breast, prostate, ovarian and thyroid cancer progression, with high grade carcinomas expressing lower levels of these negative FGFR regulators [126][127][128]. In contrast, a recent study reported that loss of SPRY1 improved the response to targeted therapy in melanoma [129], and suppression of SPRY1 inhibited triple-negative breast cancer malignancy by enhancing the epidermal growth factor and its receptor (EGF/EGFR)-mediated mesenchymal phenotype [130]. Genetic alterations of FGFR can also dysregulate signalling and contribute towards malignant progression. Next-generation sequencing (NGS) analysis of 4853 tumours revealed FGFR aberrations in 7.1% of cancers [131]. More specifically, 66% of the aberrations were gene amplifications, 26% were gene mutations and 8% were rearrangements [131]. A recent study on advanced urothelial cancer, using NGS to analyze cell-free DNA from the plasma of 997 patients, revealed that 20% had FGFR2 and FGFR3 genomic alterations, of which 14% were activating mutations [132].

Activating Mutations

The most common type of genetic variation is the single nucleotide polymorphism (SNP). FGFR2 harbours one of the first SNPs to be identified as a breast cancer susceptibility locus by genome-wide association studies (GWAS) [133,134]. Risk alleles of various SNPs found in FGFR2 are associated with ER-positive cancers [135], increased FGFR2 expression [136], lymph node metastasis in breast cancer [137] and radiation-induced breast cancer risk [138].
More recently another study identified an FGFR2 SNP that was linked with susceptibility to breast cancer in a Chinese population [139]. However, only a few SNP loci are confirmed in FGFR1 that correlate significantly with a breast cancer predisposition [140]. In contrast, a more recent study correlated three FGFR1 SNPs to reduced breast cancer risk [141]. SNPs in FGFR4 but not in FGFR3 were strongly correlated with breast cancer [142]. In breast cancer patients, FGFR4 and FGFR2 SNPs were previously suggested to be candidate pharmacogenomic factors to predict the response to chemotherapy [143]. Notably, SNPs in the FGF/FGFR axis (FGF1, FG18, FGF7, FGF23 and FGF5) were also associated with an increased risk of ovarian cancer [144]. A number of germ line activating point mutations in FGFR1, -2 and -3 are found in prostate, breast, bladder, endometrial, brain, lung, uterus, cervical, stomach, head and neck, colon and melanoma cancers (as reviewed by [145]). These mutations can alter the ligand binding, juxtamembrane and kinase domains and constitutively activate FGFR or impair FGFR degradation, leading to increased FGFR signalling (as reviewed by [111,[145][146][147]. FGFR4 activating mutations are not detected very often, apart from in rhabdomyosarcoma [148] and gastric cancer [149]. Interestingly, some of the activating mutations can result in changes in the efficacy of several inhibitors that can target FGFR, such as AZD4547, BGJ-398, KTI258, AP24534 and JNJ42756493 [150]. FGFR Gene Amplification and Overexpression Elevated FGFR levels can be achieved either via chromosomal amplification or aberrant transcription (Figure 3). In cancer, distinctive FGFR abnormalities are known such as the amplification of genes or post-transcriptional regulation, contributing to overexpression of the receptor. Mutations in FGFRs could generate receptors that are either consistently active or may demonstrate a reduced necessity of activation through ligand binding [11]. The most common abnormalities in malignancies are due to gene amplification of FGFR1, -2 and -3, as well as FGF ligands. Several studies have highlighted that FGFR is amplified in various cancers. For example, FGFR1 expression is amplified in bladder, oral, oesophageal squamous, NSCLC, prostate and ovarian cancers [151][152][153][154]. FGFR1 amplification and overexpression was observed in some patients with lymph node metastasis and advanced pathological stages of hypopharyngeal and laryngeal squamous cell carcinoma [155]. In addition, hormone receptor positive breast cancer patients with metastatic disease had FGFR1 amplification that was associated with a shorter time to progression on first line endocrine therapy [156]. Furthermore, it was suggested that FGFR1 amplification grants resistance to estrogen receptor (ER), PI3K, mammalian target of rapamycin (mTOR) and cyclin-dependent kinase (CDK)4/6 inhibitors [157]. FGFR2 amplification in gastric cancer is associated with a poor prognosis and response to chemotherapy [158]. Chromosomal Translocation The exchange of chromosomal arms (or segments) between heterologous chromosomes, known as chromosomal translocation, is a type of structural chromosomal abnormality that results in fusion genes/proteins. The generated fusion proteins can have oncogenic properties. Chromosomal translocations in FGFRs have about an 8% incidence rate [131]. 
There are two types of FGFR gene fusions: (1) when only the FGFR tyrosine kinase domain is fused to the 5 end of the fusion protein (the extracellular and transmembrane domain portion of the FGFR is missing from the fusion protein), therefore is constitutively dimerised and active; (2) when the whole FGFR remains intact and acts as the 5 fusion gene that will bind to its partner at the 3 end of the FGFR [147]. The first reports of FGFR fusion genes were in haematological malignancies. The FGFR kinase domain was fused with the N terminus of transcription factors such as ETV6, ZNF198 and BCR in lymphoma/leukaemia patients with myeloproliferative disorder stem cell syndrome [159][160][161][162]. A recent study reported EVT6-FGFR2 fusion protein in a mixed phenotype (T-myeloid/lymphoid) acute leukaemia, that resulted in aberrant FGFR2 tyrosine kinase expression and was correlated with aggressive clinical behaviour and a poor response to therapy [163]. FGFR1, FGFR2 and FGFR3 fusions are also identified in solid tumours, such as lung, colorectal, glioblastoma, breast, head and neck, bladder, cervical cancer and cholangiocarcinoma (as reviewed by [164]). A common fusion is FGFR3 with transforming acidic coiled-coil 3 (TACC3) that induces a constitutive phosphorylation of the tyrosine kinase domain and therefore activation of downstream MAPK and STAT1 pathways that further leads to increased metastatic cell behaviour (e.g., cell proliferation) [165][166][167]. There are several identified binding partners for FGFR2, some of them are TACC3 and CCDC6 in cholangiocarcinoma [166,168] and BICC1 in hepatocarcinoma and colorectal cancer [169]. Examples of FGFR1 fusion partners are HOOK3 in gastrointestinal stromal tumour, TACC1 in glioblastoma and ZNF703 in breast cancer [167,[170][171][172]. A recent genomic profiling study identified ANO3 and NSD1 as fusion partners for FGFR4 in non-small cell lung cancer [173]. Although FGFR fusions are relatively rare in human cancers it might be of interest to identify how patients with FGFR fusions respond to therapy targeting the tyrosine kinase (TK) domain of FGFR. Nuclear FGFR in Cancer FGFRs have been shown to signal via the cell membrane and endosomal compartments via downstream signalling pathways. However, studies have suggested that other TK receptors as well as FGFRs and FGFs, can target the nucleus and carry out functions that might not be dependent on tyrosine kinase activity [174][175][176][177][178][179][180][181][182]. Examples of nuclear FGFs are FGF1, that stimulated DNA synthesis, and FGF2 that was associated with increased cell proliferation in glioma cells and invasion in pancreatic cancer [11,183,184]. Both FGFR1 and FGFR2 have been reported to function in the nucleus [183,185,186]. Nuclear FGFR2 was recently found to negatively regulate hypoxia-induced cell invasion in prostate cancer [187] and nuclear FGFR1 was positively corelated with pancreatic and breast cancer progression [178,179]. Although there are strong indications, it is still not fully understood how FGF(R)s travel to the nucleus and what their mode of action is once there. Several researchers have highlighted the mechanisms by which full length TK receptors translocate via the cell membrane to the nucleus. For example, upon binding of the ligands, the activated receptors get internalised to the early endosomal compartments either via the vesicular pathway or after retro-translocation from the endoplasmic reticulum (ER) to the cytosol [181,[188][189][190]. 
The molecular mechanism by which the receptor escapes the endosomal pathway to travel to the nucleus remains elusive and conflicting data point to different trafficking possibilities. One of the possible mechanisms for nuclear translocation of full length FGFR involves retro-translocation of FGFR from the ER/Golgi apparatus [183]. Typically, after co-translational insertion into the ER membranes, FGFR1 traffics via the vesicular transport systems through the Golgi apparatus to reach the plasma membrane [185,191]. This process may be accompanied by retro-translocation of the pool of FGFR into the cytosol, with FGFR1 undergoing retrograde transport via the sec61 channel, similarly to ER-associated protein degradation [183]. Once in the cytosol, FGFRs interact with ribosomal S6-kinase 1 and FGF2 which facilitates receptor transport to the nucleus to directly regulate gene expression [185,191]. Full length FGFR is a molecule too large to pass through the nuclear membrane via diffusion, and another mechanism involves the full-length receptor in the cytoplasm activating the importin beta pathway to enter the nucleus [176]. The nuclear receptor can then interact with other nuclear proteins to control transcription [185,192,193]. An alternative is that the nuclear trafficking of the receptor is dependent on proteolytic cleavage of the intracellular domain allowing translocation to the nucleus of the unrestricted cytoplasmic portion [179,194]. There are several mechanisms utilised by tyrosine kinase receptors to reach the nucleus, but generation of nuclear RTK fragments via alternative splicing of the receptor or proteolytic cleavage of FGFRs/RTK with caspases, secretases, granzymes and other proteases (e.g., ADAM10/15/17) [179,184,186,188], are increasingly reported. The FGF receptor can be present in a cleaved form before trafficking to the nucleus, and there are indications suggesting this proteolytic pathway might be FGFR kinase activity-dependent [179]. Previous studies indicated that Notch1 and FGFR1 can be cleaved by Granzyme B (GrB) [189]. In breast cancer cells, FGFR activation-dependent cleavage of FGFR1 generates a C-terminal fragment that can translocate to the nucleus and control the expression of target genes [179]. Nuclear FGFR1 could control the oncogenic networks involved in organ development, tissue and cell pluripotency, cell cycle, cancer related TP53 pathway, neuroectodermal and mesodermal programming networks, axonal growth and synaptic plasticity pathways [190]. Therefore, there might be a novel mechanism by which FGFR signalling can control metastatic cancer cell behaviour. This further suggests a potentially novel therapeutic target for invasive cancer treatment. Targeting FGFR Signalling in Cancer One of the main obstacles in cancer therapy is chemoresistance and radioresistance. There is evidence highlighting the possible role of the FGFR axis in the development of drug resistance. For example, overexpression of FGF2 and FGF1 are linked with both in vivo and in vitro resistance to cancer drugs such as doxorubicin, 5-fluorouracil and paclitaxel [195]. Interestingly, a pan-FGFR inhibitor (BGJ398) was able to overcome paclitaxel resistance in FGFR1 expressing urothelial carcinoma [192]. Another study identified FGFR4 as a targetable element of drug resistance in colorectal cancer [193]. 
Increased FGFR1 and FGF3 expression was correlated with a poor response to anti-HER2 treatment in breast cancer patients, and this was overcome using a combination therapy of FGFR inhibitors together with lapatinib and trastuzumab [196]. Overexpression of FGFR3 was also linked with tamoxifen resistant breast cancer [197]. In afatinib-resistant non-small cell lung cancer cells, overexpression of FGFR1 and FGF2 played a role in overcoming cell survival by compensating the loss of the estrogen growth factor receptor (EGFR)-driven signalling pathway [198]. In addition, gefitinib sensitivity was also restored in non-small cell lung cancer cells when FGF2 and FGFR1 were inhibited via siRNA and treatment with a small molecule inhibitor, PD173074, suggesting FGFR activation as a potential mechanism of acquired resistance to EGFR-TKs [199]. In FGFR1 amplified lung cancer, a combination therapy approach overcame resistance to treatment with an FGFR inhibitor [200]. In EGFR-dependent cancers of multiple cell lineages, FGFR3-TACC3 fusion proteins are also characterised as "naturally occurring drivers of tumour resistance" by reactivating EGFR/ERK signalling [201]. Considering all the evidence together, this highlights the importance of targeting the FGFR axis in combination therapies tailored for different cohorts of patients. Therapeutic targeting of FGFs and their receptors is a key area of drug development. Several drugs targeting FGF pathways are currently under clinical investigation (Table 1). However, abrogating FGFR signalling can be accomplished by targeting the diverse components present in the pathway, which include the ligands, receptors as well as the products of the downstream signalling pathway [61] (Figure 5). Nevertheless, converting knowledge into a treatment for patients has proven challenging as even specific inhibitors targeting FGFR have off-target effects [202][203][204]. Hence, further research is necessary to determine the mechanisms of effective targeting of FGFR signalling in cancer without obstructing its fundamental functions in healthy cells. The FGFR targeting field has progressed significantly, as novel agents inhibiting FGF ligands or using monoclonal antibodies and FGF ligand traps have been developed as well as using FGFR non-selective and selective inhibitors (Figure 4). The ATP-competitive small molecules were the first FGFR inhibitors [205,206]. PDGFR and VEGFR share comparable structural homology to FGFRs, hence these inhibitors can act as multitarget tyrosine kinase inhibitors (TKIs) as they also bind and act on the conserved ATP-binding regions. Infigratinib (a pan-FGFR kinase inhibitor) was evaluated in a phase 2 study for biliary tract carcinoma with FGFR alterations, with all responsive tumours containing FGFR2 fusions. The overall response rate for FGFR2 fusions was 18.8% and the disease control rate was 83.3% with an estimated median progression-free survival of 5.8 months [222]. Currently there are seven phase 1 and 2 clinical trials evaluating Infigratinib in gastric, adenocarcinoma, breast, advanced malignant solid neoplasm, bladder, renal pelvis and ureter urothelial carcinoma, advanced cholangiocarcinoma and glioblastoma (NCT05019794, NCT04504331, NCT04233567, NCT04972253, NCT04197986, NCT04228042, NCT02150967, NCT04424966). 
Most importantly, there are two phase 3 clinical trials investigating Infigratinib as a possible cancer treatment for upper tract urothelial carcinoma/urothelial bladder cancer (NCT04197986) and advanced cholangiocarcinoma (NCT03773302). In trials using Erdafitinib (another a pan-FGFR kinase inhibitor), the rate of confirmed response in advanced metastatic urothelial carcinoma was 40%, with the median duration of progression-free survival at 5.5 months, and median duration of overall survival at 13.8 months [223]. In a phase 2 study on cholangiocarcinoma patients with FGFR alterations, it was reported that the disease control rate was 83.3% and median progression free survival was 5.59 months. In 10 evaluable FGFR2+ patients the disease control rate was 100% and the median progression-free survival was 12.35 months [224]. Currently, there are nineteen phase 1 and 2 clinical trials on Erdafitinib and cancers such as breast, bladder/urinary bladder, lung, advanced solid tumours, urothelial, prostate, and multiple myeloma (NCT03238196, NCT04917809, NCT04172675, NCT02699606, NCT04083976, NCT02365597, NCT03547037, NCT03999515, NCT04754425, NCT05052372, NCT03827850, NCT03210714, NCT03473743, NCT04963153, NCT03955913, NCT03088059 NCT02925234, NCT02465060, NCT03732703, NCT03155620). There is also a phase 3 clinical trial evaluating Erdafitinib in urothelial cancer (NCT03390504). Therapeutic monoclonal antibodies have been established with the rationale that they could target FGF ligands and FGFR isoforms with a high specificity, hence offering an alternative to inhibitors that might have side effects [226,227]. Antibodies can compromise the other benefit of employing the immune system to synergise with the antitumour activity via antibody-dependent cellular cytotoxicity or complement-dependent cytotoxicity. A number of anti-FGFR monoclonal antibodies have also been considered in preclinical studies [228,229]. Human anti-FGFR3 mAb, MFGR1877S (Genentech), is a monoclonal antibody against FGFR3 and has been used against multiple myeloma and MFGR1877S and has also shown antitumour activity for overexpressed FGFR3 in preclinical models of bladder cancer [221,[229][230][231][232][233]. Phase I clinical trials of MFGR187S have been carried out in t(4;14) translocated multiple myeloma patients [233]. Furthermore, GP369 is a specific and potent anti-FGFR2b monoclonal antibody that suppresses phosphorylation and the downstream signalling induced by ligand binding. FGFR2 activated signalling in mice significantly inhibited the growth of human cancer xenografts in the presence of GP369 [234,235]. Antibodies against FGF2 and FGF8 have also shown promising results in inhibiting tumour progression and angiogenesis [236,237] A human single-chain variable fragment (ScFvs; 1A2) that binds to human FGF2 was identified via screening of a human scFv phage library [238]. This purified antibody inhibited various biological functions of FGF2, such as proliferation/growth, migration and tube formation of human umbilical vein endothelial cells and apoptosis in glioma cells in vitro [238]. An alternative method of inhibiting FGFR signalling is via a ligand trap to isolate FGF ligands preventing them from binding to and activating FGFRs [112,239,240]. FP-1039 (GlaxoSmithKline, GSK3052230) is a soluble fusion protein that consists of an extracellular FGFR1-IIIc domain fused to the Fc portion of IgG1 that inhibits the binding of FGF1, -2, and -4 to FGFR1 and has shown promising results in solid tumours [241][242][243]. 
Other FGF2 antagonists are small molecules such as sm27, PI-88, pentosan and pentraxin-3 [8]. Because of the ability to bind to heparin/heparan sulphate, chemical compounds mimicking heparin (i.e., suramin) could antagonise FGF2 binding and inhibit its action [244]. Peg-interferon alpha-2b was also able to suppress the plasma FGF2 level in melanoma patients with metastasis and gave a clinical response [245]. FGF2-induced angiogenesis was also inhibited by sulfonic acid polymers such as PAMPS, small molecules such as sirolimus, PI-88, thalidomide, suramin and platelet factor 4 protein (as reviewed by [246]). Not much is known about the mechanism by which FGFR inhibitors induce cell death. Recent work on endometrial cancer showed that FGFR inhibitors (Infigratinib, AZD4547 and PD173074) caused mitochondrial depolarisation, cytochrome c release and impaired mitochondrial respiration in two FGFR2-mutant endometrial cancer cell lines (AN3CA and JHUEM2). However, they did not detect caspase activation following FGFR inhibition. When they were treated with the pan-caspase inhibitor (Z-VAD-FMK) they did not prevent cell death, suggesting that the cell death was caspase-independent [247]. Bcl-2 inhibitors enhanced FGFR inhibitor-induced mitochondrial-dependent endometrial cancer cell death [247]. Interestingly, in another study, Infigratinib induced cell death in nonsmall cell lung cancer cells (H1581) by activating the caspase-dependent mitochondrial and non-mitochondrial pathway [239]. In high-grade bladder cancer cells, a combination treatment with Infigratinib and a novel histone deacetylase inhibitor (OBP-801/YM753/spiruchostatin A), inhibited cell growth and markedly induced apoptosis, by activating caspase-3, -8 and -9. Interestingly, a pan-caspase inhibitor (Z-VAD-FMK) significantly reduced the apoptotic response to the combined treatment. The combination treatment was shown to be at least partially dependent on Bim [240]. Conclusions Even though drugs targeting tyrosine kinase activity (e.g., HER2, FGFR, EGFR, VEGFR2), can prolong survival by inducing cancer regression, the lack of selectivity to a single target and/or development of drug resistance remains a problem. The heterogeneous nature of cancer, the involvement of the tumour microenvironment, together with the pleiotropic way FGFR signalling functions, highlights the need for a more personalised approach in cancer treatment and combination therapies. Experimental data and clinical trials focusing on targeting the FGFR axis have demonstrated positive outcomes. An awareness of FGFR genetic alterations or the FGFR mode of action in cancer patients (e.g., whether FGFR acts via a paracrine or autocrine mechanism in a specific tumour) is important for tailoring combinations of targeted therapies aiming at the FGFR axis. For example, using small molecule FGFR inhibitors, RNA based drugs, FGF traps and humanised/human anti-FGFR monoclonal antibodies in combination with targeting the immune system and/or other signalling pathways. A better understanding of FGFR biology could also help in identifying the mechanisms of drug resistance to FGFR inhibitors and facilitating their bypass. Developing diagnostic assays to screen patients for FGF and FGFR status for a targeted approach might help improve treatment efficacy.
Genetic risk of clozapine-induced leukopenia and neutropenia: a genome-wide association study Background Clozapine is considered to be the most effective antipsychotic medication for schizophrenia. However, it is associated with several adverse effects such as leukopenia, and the underlying mechanism has not yet been fully elucidated. The authors performed a genome-wide association study (GWAS) in a Chinese population to identify genetic markers for clozapine-induced leukopenia (CIL) and clozapine-induced neutropenia (CIN). Methods A total of 1879 patients (225 CIL cases, including 43 CIN cases, and 1,654 controls) of Chinese descent were included. Data from common and rare single nucleotide polymorphisms (SNPs) were tested for association. The authors also performed a trans-ancestry meta-analysis with GWAS results of European individuals from the Clozapine-Induced Agranulocytosis Consortium (CIAC). Results The authors identified several novel loci reaching the threshold of genome-wide significance level (P < 5 × 10−8). Three novel loci were associated with CIL while six were associated with CIN, and two T cell related genes (TRAC and TRAT1) were implicated. The authors also observed that one locus with evidence close to genome-wide significance (P = 5.08 × 10−8) was near the HLA-B gene in the major histocompatibility complex region in the trans-ancestry meta-analysis. Conclusions The associations provide novel and valuable understanding of the genetic and immune causes of CIL and CIN, which is useful for improving clinical management of clozapine related treatment for schizophrenia. Causal variants and related underlying molecular mechanisms need to be understood in future developments. Introduction Schizophrenia is a severe mental disorder accompanied by considerable morbidity and mortality, which affects~1 % of the population worldwide 1 . The etiology of schizophrenia is still not well understood. Environmental and genetic factors play important roles in the development of schizophrenia 2 . The heritability of schizophrenia is estimated to be 60-80% 2,3 . Antipsychotic medications are commonly used to treat patients with schizophrenia, but responses to these drugs vary widely 4 . Although antipsychotics can relieve symptoms of psychosis,~75% of patients discontinue their therapy due to adverse effects in two years 5 . Clozapine is considered to be the most effective antipsychotic medication for schizophrenia and has a distinctive pharmacological profile in treatment-resistant schizophrenia 6 . However, it is also associated with several adverse effects such as weight gain, metabolic dysfunction, cardiovascular disease and leukopenia. Leukopenia is defined as white blood cell (WBC) count less than 4,000 cells per microliter. Neutropenia and agranulocytosis are different types of leukopenia with an absolute neutrophil count (ANC) less than 1,500 and 500 cells per cubic millimeter (mm −3 ), respectively. Clozapine-induced agranulocytosis (CIA) is a severe leukopenia that may be life-threatening, first reported in Finland in 1974 7,8 . Among patients taking clozapine, the cumulative risk of neutropenia is 3.8% and for agranulocytosis is 0.9% 9 . There is evidence of a genetic contribution in the onset of clozapine-induced leukopenia (CIL), neutropenia (CIN) and CIA 10 ; however, the underlying mechanism remains unclear and may involve multiple genes. 
The HLA (human leukocyte antigen) system, which locates in the major histocompatibility complex (MHC) region, has been reported to be associated with CIL [11][12][13] . HLA genes such as HLA-B and HLA-DQB1 were reported to be associated with CIL by Clozapine-Induced Agranulocytosis Consortium (CIAC) and many other researchers [11][12][13] . Previous genome-wide association studies (GWAS) and wholeexome sequencing studies have indicated that other genes such as SLCO1B3, SLCO1B7, UBAP2 and STARD9 also play important roles in the pathophysiology of CIN 12 . Most studies of CIN have focused on individuals of European ancestry. To generalize the results, some recent studies have begun to investigate patients from different ethnic groups. Ancestry-based differences in alleles may be part of the reason for different prevalence of CIL; for example, Duffy-null genotype has been found to be associated with benign neutropenia in patients with African ancestry 14 , and some previously reported SNPs have not been replicated in individuals of Japanese ancestry 12,15 . Clozapine has been available for Chinese patients since the 1970s. It was once a commonly used antipsychotic drug in China, prescribed for~25-60% of patients with schizophrenia 16,17 , and the reported prevalence of CIL in China is 3.92% (0.21% for CIA) 16,18 . In this study, we report what we believe to be the first GWAS of CIL in a Chinese population, seeking to identify genetic determinants and provide more information on possible underlying etiology. Study design and participants We analyzed genome-wide data of patients with schizophrenia receiving clozapine treatment from a Chinese cohort. Patients were recruited for this study from Wuhu Fourth People's Hospital and Guangxi Longquan Mountain Hospital from 2007 to 2012. The clinical interviews were conducted by two independent psychiatrists. All patients had diagnoses of schizophrenia according to the Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria. Those patients with treatment-resistant schizophrenia 19 were prescribed clozapine and 1,983 patients were included in this study. Those who developed WBC < 4,000 mm −3 during treatment with clozapine were defined as clozapine-induced leukopenia (CIL) cases (n = 242). Of these, 46 cases also developed ANC < 1,500 mm −3 , and were thus defined as clozapine-induced neutropenia (CIN) cases. All controls (n = 1,741) received clozapine without developing WBC < 4,000 mm −3 or ANC < 1,500 mm −3 . We also performed a genome-wide meta-analysis of our Chinese sample with individuals of European ancestry from the Clozapine-Induced Agranulocytosis Consortium (CIAC) 11,12 . In accordance with the principles in Declaration of Helsinki, the study was approved by the Ethics Committee at the Bio-X Institutes of Shanghai Jiao Tong University and the written informed consent was obtained from each participant. It is confirmed that the research complied with the Guidance of the Ministry of Science and Technology (MOST) for the Review and Approval of Human Genetic Resources. Procedures and statistical analysis Details on genotyping and quality control procedures for the genome-wide analysis of Chinese samples are described in Supplementary Material. We performed primary genome-wide association analysis for CIL To minimize the effect for population stratification, we performed association analysis separately for two sample sets (according to genetic origin criteria), and then combined the summary in the meta-analysis. 
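The case and control definitions above reduce to a simple thresholding rule on the lowest blood counts recorded during clozapine treatment. The following is a minimal sketch of that rule; the function and argument names are hypothetical, and the handling of a hypothetical patient with low ANC but normal WBC is an assumption, since the text does not state it explicitly.

```python
def label_patient(wbc_min_per_ul: float, anc_min_per_mm3: float) -> str:
    """Classify a patient from the lowest WBC and ANC observed on clozapine.
    Thresholds follow the study definitions: CIL if WBC < 4,000 per microliter;
    CIN if the ANC additionally fell below 1,500 per cubic millimeter."""
    if wbc_min_per_ul < 4000:
        return "CIN" if anc_min_per_mm3 < 1500 else "CIL"
    if anc_min_per_mm3 < 1500:
        return "unclassified"  # assumption: not a CIL case, but not a clean control either
    return "control"           # never crossed either threshold during treatment
```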
For the common variants (minor allele frequency [MAF] ≥ 1%), including both genotyped and imputed variants, association analyses were performed by logistic regression using imputed dosage files and the meta-analysis via METAL with a fixed-effects model. Heterogeneity among the sample collections in the meta-analysis was measured with the I 2 index and the p value, which was calculated with Cochran's Q test. For the rare variants, we restricted to the genotyped variants only, association and meta analyses were carried using PLINK with the Cochran-Mantel-Haenszel method. We conducted secondary analyses on a subset of the more severely affected cases with ANC < 1,500 mm −3 (CIN). Controls were still those who received clozapine without developing WBC < 4,000 mm −3 . Those who had a test result with ANC ≥ 1,500 mm −3 and WBC < 4,000 mm −3 were excluded from the secondary analysis. All analyses performed were consistent with methods described above for CIL analysis. We performed the trans-ancestry meta-analysis of our CIN results with data from the CIAG study 11,12 for common variants. We adopted a fixed-effects model implemented in RICOPILI 20 for the analysis. We annotated the genome-wide significant variants and their proxies (r 2 ≥ 0.8 in EAS of 1000 Genomes Project Releases Phase 3) to explore putative causal variants and genes underlying these association signals. We also searched for evidence that the common variants were associated with expression of a particular gene in several expression quantitative trait locus (eQTL) datasets: the Genotype-Tissue Expression (GTEx, v7) 21 , immune cells 22 and lymphoblastoid cell lines (LCLs) 23 . Potential functional effects of the sentinel variants were predicted by the GWAVA (Genome Wide Annotation of VAriants) 24 , a web-based tool to prioritize the functional variations based on a wide range of annotations of noncoding elements (such as ENCODE/GENCODE, evolutionary conservation and GC-content). Dual-luciferase reporter assays An in vitro functional assay was used to investigate alleledependent effects of the prioritized variants by GWAVA on gene transcription. Experiments were performed with human embryonic kidney (HEK293) cell line. Luciferase reporter constructs were cloned by using the pGL3promoter vector (Promega, Madison, WI, USA) as a backbone. For each variant, two DNA fragments containing risk or nonrisk allele were produced and all constructs were verified via Sanger sequencing. The pGL3 reporter, risk and nonrisk vectors were transfected into HEK293 cells. Luciferase activities were assessed at 48-h posttransfection according to the manufacturer's protocols (Promega). The activity of firefly luciferase was normalized to that of Renilla luciferase to control for variations in the transfection efficiency between different wells. All assays were performed in three or more biological replicates in independent experiments, and two-tailed t-tests were performed to analyze statistical differences between experimental groups. Further details are provided in the Supplementary Material. 
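For the common variants, the per-dataset effects are combined under a fixed-effects model (via METAL), with between-dataset heterogeneity summarized by Cochran's Q test and the I² index. The sketch below is not METAL itself, just the standard inverse-variance-weighted fixed-effects combination of per-dataset log-odds ratios together with the usual Q and I² formulas; the numbers in the usage line are illustrative, not values from the study.

```python
import numpy as np
from scipy import stats

def fixed_effects_meta(betas, ses):
    """Inverse-variance-weighted fixed-effects combination of per-dataset
    log-odds ratios (betas) and standard errors (ses), with Cochran's Q
    heterogeneity test and the I^2 index."""
    betas = np.asarray(betas, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses ** 2                           # inverse-variance weights
    beta = np.sum(w * betas) / np.sum(w)         # pooled log-odds ratio
    se = np.sqrt(1.0 / np.sum(w))
    p = 2.0 * stats.norm.sf(abs(beta / se))      # two-sided p value
    q = np.sum(w * (betas - beta) ** 2)          # Cochran's Q statistic
    df = len(betas) - 1
    het_p = stats.chi2.sf(q, df) if df > 0 else np.nan
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return {"beta": beta, "se": se, "p": p, "OR": np.exp(beta),
            "Q": q, "het_p": het_p, "I2": i2}

# Illustrative combination of one variant's effect across two sample sets.
print(fixed_effects_meta(betas=[0.75, 0.82], ses=[0.20, 0.25]))
```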
Results From Jan 1, 2007 to Dec 31, 2012, 1,983 patients with treatment-resistant schizophrenia, who were prescribed clozapine, were enrolled from two Mental Health Centers in China for genome-wide genotyping; 104 (5%) failed quality control (see Supplementary Material), of whom 31 did not meet array QC metrics criteria, 19 did not meet call rate criteria, 23 did not meet identity-by-descent criteria, and 31 were excluded as population outliers after a principal component analysis (see Supplementary Fig. 1) Fig. 1, Supplementary Material). After imputation and quality control, genotypes for 7,966,570 variants were available for downstream analyses. We performed genome-wide association analysis for 7,936,469 common (MAF ≥ 1%, both genotyped and imputed, see Fig. 2) and 30,101 rare (MAF < 1%, genotyped only, see Supplementary Fig. 2) variants. In the association analysis of 225 CIL cases and 1,654 controls for common variants, one locus near TRAC on chromosome 14 was identified to be associated with CIL genome-wide significantly (five variants, P < 5 × 10 −8 , see Fig. 2), and the strongest association was at rs377360 (MAF = 18.9 %, OR = 2.19, 95% CI = 1.66-2.89, P = 2.58 × 10 −8 , see Table 1, Fig. 3 and Supplementary Table 2). As shown in the Q-Q plot, the genomic inflation factor (λ = 0.976) indicated good control of population stratification (see Supplementary Fig. 3). None of the genome-wide significant variants and their proxies (r 2 ≥ 0.8 in EAS of 1000 Genomes Project Releases Phase 3) fell within coding regions, except rs3701 which overlaps a 3' untranslated region (3'-UTR) of TRAC (see Supplementary Table 3). Based on our eQTL analysis, most were associated with gene expression levels of multiple genes (P < 0.05) in whole blood and immune cells (see Supplementary Table 4). The target genes included T cell receptor alpha and delta variable and constant segments coding genes (such as TRAC), and also other genes (such as DAD1), as shown in Supplementary Table 4. For the rare variants, we identified five genome-wide significant variants from two regions (see Supplementary Fig. 2), which were indexed by rs4773794 in DCT on chromosome 13 (MAF = 0.18%, OR = 12.79, P = 3.01 × 10 −10 ), and rs10512698 in EEFSEC on chromosome 3 (MAF = 0.88%, OR = 4.79, P = 1.09 × 10 −8 ). All these lead variants showed significant associations in both datasets (P < 0.05, see Supplementary Table 2) and no evidence for heterogeneity across datasets were observed (Het P > 0.05, see Fig. 4 and Supplementary Fig. 4). For the CIN analysis of 43 cases and 1,654 controls, we identified three associated common variants, having MAF between 1% and 5%, that achieved genome-wide significance. These are from two regions and indexed by rs116982346 (MAF = 1.90%, OR = 19.96, P = 2.15 × 10 −8 ) on chromosome 3 and rs73482673 (MAF = 1.10%, OR = 12.05, P = 3.30 × 10 −8 ) on chromosome 9 (see Fig. 3 and Table 1). All these variants showed significant associations in both datasets (P < 0.05, see Supplementary Table 2) and no evidence for heterogeneity across datasets was observed (Het P > 0.05, see Fig. 4). All the genome-wide significant variants and their proxies are in intergenic regions, and the nearest coding genes are TRAT1 (rs116982346) and ELAVL2 (rs73482673) (see Fig. 4). One of them was associated with gene expression level of ELAVL2 in lymphoblastoid cell lines at P < 0.05 (see Supplementary Table 4). We also found several CIN associated rare variants (see Supplementary Fig. 
2), which were indexed by rs9808117 in HECW2 on chromosome 2 (MAF = 0.64%, OR = 9.10, P = 9.82 × 10−9), and rs373695 (see Table 2). Some proxies of the genome-wide significant variants are variants in coding regions, such as the 3'-UTR of EPN2 and a missense variant in B9D1 for rs7501702, and the 3'-UTR of REC114 and a synonymous variant in HCN4 for rs8024434 (see Supplementary Table 3). Five variants prioritized by their highest GWAVA scores (rs377360 and rs4773794 for CIL; rs9808117, rs8024434 and rs7501702 for CIN; see Supplementary Table 5) were tested for transcriptional activity using the dual-luciferase reporter assay (see Supplementary Table 6). Luciferase assays showed that the DNA segment containing rs377360-T (risk for CIL) increased transcriptional activity in HEK293 cells (49% increase [SD 16.6] compared with the nonrisk allele, P < 0.001, Fig. 5). Consistent with the GWAVA prediction, we found that rs4773794-G (protective for CIL) was associated with the most significant increase of transcriptional activity in HEK293 cells (50% increase [SD 27.2] compared with the risk allele, P < 0.001, Fig. 5). For the CIN-associated variants, only mild or little effect on transcriptional activity was observed (see Fig. 5).

Figure 4 legend fragment: c) rs73482673; d) rs11753309. The area of each square represents the weight of the corresponding sample; horizontal lines represent the OR and 95% CI in the two independent datasets, and the diamond represents the overall 95% CI estimated in the meta-analysis. OR, odds ratio; CI, confidence interval.

Figure 5 legend fragment: e) rs7501702. HEK-293 cells were transfected with luciferase reporter constructs for both alleles of each variant; each construct was transfected five times and assayed in quintuplicate in each experiment. Luciferase activities, normalized to cotransfected Renilla activity, are expressed relative to the normalized activity of the vector (pGL3-Promoter), which was set to 1. Data represent means ± SDs; P values were determined by t-test. Significance indicators above the bars: *** if p < 0.001, ** if p < 0.01, * if p < 0.05, ns otherwise.

Discussion

To our knowledge, our study is the first GWAS of CIL and CIN in individuals of Chinese ancestry and identified nine novel genome-wide significant loci. Five of the nine index variants identified in our GWAS mapped to the intron of a protein-coding gene, and all except one of the others had a protein-coding gene within 15 kb. The trans-ancestry meta-analysis of the Chinese and European ancestry samples identified one risk locus that was close to genome-wide significance. Mechanisms underlying CIL and CIN have not yet been fully elucidated, although an immune reaction has long been hypothesized to be implicated 10 . We identified two T cell related genes in this study, TRAC (T cell receptor alpha constant) and TRAT1 (T Cell Receptor Associated Transmembrane Adaptor 1). The TRAC gene, located on 14q11-12, encodes the constant region of the T cell receptor (TCR) alpha chain. The alpha loci are synthesized from variable, joining, and constant segments. The TCR recognizes antigen fragments as peptides bound to MHC molecules, which are important for thymic selection. Deficiency of MHC may cause abnormal immune function and result in a lack of maturation of the corresponding T cells 25 . Diseases associated with TRAC include TCR alpha/beta deficiency. It has been reported that TRAC mutations cause a human immunodeficiency disorder characterized by a lack of TCR αβ + T cells 26 . TRAT1 also plays an important role in the immune process. Among the predominantly downregulated cell surface antigens, TRAT1, along with ZAP70, LCK and TCR-associated tyrosine kinases involved in TCR activation, were all simultaneously downregulated 27 . Although we did not identify susceptibility loci reaching the level of genome-wide significance after the trans-ancestry meta-analysis, we found a near-significant risk locus for CIN near HLA-B. HLA-B is a paralog of the HLA class I heavy chain, which is anchored in the membrane and plays an important role in the immune system by presenting peptides derived from the lumen of the endoplasmic reticulum. HLA-B was reported as a susceptibility gene for CIA in a Caucasian population 11 , while HLA-B*59:01 was suggested as a risk factor for CIA in the Japanese population 28 . Most studies of CIL have been conducted in Caucasian populations, and it may not be possible to replicate identified variants across different races and populations for many reasons. However, associations of variants from GWAS have generally added evidence for immune-related mechanisms. The biological activation or the conversion of stable metabolites to chemically reactive nitrogen ions could represent another causal pathway [29][30][31] . However, hypotheses of toxicity and immune regulation are not necessarily mutually exclusive. The combination of these reactive metabolites can result in CIL, CIN or CIA through direct toxicity or via initiation of immune mechanisms, or both. Clozapine may potentially be oxidized into toxic nitrenium ions, which bind to neutrophils and cause an autoimmune reaction or induce cell death. In this study, we also found several other compelling candidate genes. NEO1 at 15q24.1 encodes a cell surface protein composed of four N-terminal immunoglobulin-like domains, which may be involved in cell growth, differentiation and cell adhesion. Neo1 contributes to the inflammatory response and resolution mechanisms 32 . Genetic deletion or functional inhibition of Neo1 leads to reduced neutrophil recruitment and shortens the neutrophil lifespan by increasing apoptosis 33 . DCT at 13q32.1 encodes dopachrome tautomerase and participates in melanocyte pigment biosynthesis with tyrosinase and tyrosinase-related protein 1 34 . It also plays a role in the response to oxidative stress and apoptotic stimuli 35,36 . An experiment in mice found that the spatiotemporal distribution of DCT expression was related to cortical neurogenesis during embryonic development 37 . Diseases associated with DCT include microphthalmia and melanoma. Previous studies have shown that a large portion of chromosome 13, spanning 68 Mb from 13q12 to 13q34, could be linked to susceptibility to schizophrenia 38,39 . EEFSEC, located at 3q21.3, encodes the selenocysteine-tRNA-specific eukaryotic elongation factor, which is essential for the synthesis of selenoproteins 40 . Selenoproteins comprise several proteins with different functions, some of which play important roles in the preservation of antioxidant defense and in anti-inflammatory regulation 41 . Polymorphisms or mutations in EEFSEC could account for a variety of disorders, including neuropsychiatric disorders and immune dysfunctions 42 . We hypothesize that EEFSEC may participate in the mechanism of CIL through selenoproteins. Many selenoproteins participate in redox signaling, redox homeostasis, and antioxidant defense through the action of glutathione peroxidase 43 .
The amount of selenoproteins and glutathione peroxidase decreases under oxidative stress conditions, and the activity of selenium-dependent glutathione peroxidase has been found to be lower in patients treated with clozapine 44 . Activation of clozapine and/or its metabolites can produce electrophilic nitrenium ions, which might either covalently bind to neutrophils such as glutathione to cause cell death or cause oxidative stressinduced neutrophil apoptosis 45,46 . We feel it is reasonable to assume that the risks of CIL and CIN are genetically complex, involving several genes. The application of GWAS and other modern genomic technologies gives us a better understanding of the genetic basis of CIL and CIN. Our study is the first GWAS of CIL and CIN in individuals of Chinese ancestry and identified several novel genome-wide significant loci. However, there are still some limitations in our study. First, many SNPs detected were noncoding variants with unknown effects, which is a common problem with the GWAS approach. There may also be potential Winner's curse bias in estimating the average genetic effect of a set of rare variants and identifying statistically significant associations. Therefore, further functional studies are needed to verify the related biological mechanisms. Second, since some conditions such as circadian rhythm can affect the amount of peripheral blood neutrophils in patients, other related factors should also be taken into account. Finally, an important challenge is that there is relatively rare incidence of CIN and even rarer incidence of CIA in the Chinese population. In fact, only six CIA cases met the criteria in our study, which may be due to the success of the monitoring system, since once leukopenia is detected, clozapine will be discontinued. It is therefore impossible to know which patients would continue to develop CIA. The rare incidence limits the availability of suitable patients and thus the sample size for research. Therefore, replication of larger sample sizes is critical for clinical applications. Clozapine is the only effective medication for treatment-resistant schizophrenia, but its use is limited due to the risk of leukopenia, particularly agranulocytosis. Genetic and immune factors are likely to play an important role in determining risk for such adverse reactions. Our results provide novel and valuable understanding of the genetic and immune causes of CIL and CIN. Causal variants and related underlying molecular mechanisms need to be understood in future developments, which will be key to improving diagnostic and therapeutic capabilities in treatment-resistant schizophrenia.
Neurodynamics of up and down Transitions in Network Model

Introduction

Different patterns of brain activity can give rise to different behavioral states of animals. Neural electrophysiology experiments show that during slow-wave sleep in the primary visual cortex of anesthetized animals [1][2][3] and during quiet wakefulness in the somatosensory cortex of unanesthetized animals [4,5], the membrane potentials make spontaneous transitions between two different levels called up and down states [6]. Transitions between up and down states can also be evoked by sensory stimulation [1,4,[7][8][9][10][11]. An interesting result of these transitions is that sensory-evoked activity patterns are similar to those produced spontaneously. A hallmark of this subthreshold activity is a bimodal distribution of the membrane potential [12]. However, why these transitions occur, and whether this spontaneous activity engages in brain functions, remains unclear. In fact, we know little about the expression of neuronal membrane potentials and the interactions within neural networks, especially the relationship between neural coding modes and cognitive behaviors. Our purpose is therefore to understand the inner connection between the up and down transitions of a single neuron and those of a neural network.

Recent findings show that activation of a single cortical neuron can significantly modulate sensory and motor outputs [13,14]. Furthermore, repetitive high-frequency burst spiking of a single rat cortical neuron could trigger a switch between cortical states resembling slow-wave and rapid-eye-movement sleep [15]. This is reflected in the switching of the membrane potential of the stimulated neuron from high-frequency, low-amplitude oscillations to low-frequency, high-amplitude ones, or vice versa. At the same time, the cortical local field potential (LFP) changes over time. Here we use the local field potential (LFP) to describe the state of the whole cortex [16][17][18][19]. Therefore, the up and down states of a single neuron reflect distinct global cortical states, which resemble slow-wave and rapid-eye-movement sleep, respectively [20][21][22]. All of these results point to the power of single cortical neurons in modulating the behavioral state of animals [15]. Here, a single neuron affects the state of the whole network by influencing the neurons it is coupled to.

We started our research on a single neuron, studied the electrophysiological phenomenon of state transitions, and obtained bistability and spontaneity similar to experimental observations. In addition, we found that these up and down transitions are unidirectional or bidirectional for different parameters. Bistability means that the neuron stays in one state before stimulation and switches to another state after stimulation. These two states are called the up state and the down state, respectively; that is, the neuron can switch between up and down states. Directivity refers to the fact that switching from one state to another is not arbitrary: in some cases, the transition can only occur from the up state to the down state, while in other cases it occurs from the down state to the up state. Spontaneity, the periodic spontaneous activity of the neural membrane potential, is the most significant feature of the transition.
This paper tries to further explore the neural dynamic mechanism of up and down transitions in a neural network based on the above results. This work will lay a foundation for studying the relationship between neural coding and cognitive behavior. We focus on the dynamic process of the average membrane potential of a small neural network that consists of 25 neurons and switches between up and down states, and we examine the differences between transitions in the network and in a single neuron by numerical simulation and theoretical analysis. We then try to understand what drives the appearance of behavioral states and the triggers of brain cognition. What we want to highlight is how strongly the emerging relationship between behavioral states and cognition, together with the ratio of activated neurons, affects up and down transitions. This is also the subject of our further study. 

Network Model There are different kinds of complicated connections between neurons. According to the topology, some scholars have proposed chain links, ring links, grid links, and so forth [23]. However, the internal connections between neurons are much more complicated than these. This paper constructs a dynamical network model that consists of 25 neurons based on a previous study [24]. In this network, any one neuron connects to all other neurons in the network. That means every two neurons in the network are coupled, with connection strengths that are asymmetric and obey a standard uniform distribution [25]. 

The coupling strengths between neurons can be expressed with a matrix variable, denoted by W. Then we have the coupling matrix W = (w_ij) in equation (1), where w_ij represents the coupling strength from neuron i to neuron j. Naturally, neurons do not couple with themselves, so the self-coupling strengths w_ii equal zero. 

To illustrate the state changes of the whole network, we use the change of the average membrane potential to describe the change of the local field potential (LFP); that is, we average the membrane potentials of all neurons in the network to express the LFP. The dynamic model of one neuron in the neural network is based on the H-H equations and is described by (2) to (8). This dynamic model consists of three ionic currents and synaptic currents which come from surrounding neurons. The ionic currents comprise an instantaneous, inward current (a sodium current), a slow h-like current [26,27], and outward currents (a potassium current and a leak current). Two types of persistent inward current, persistent sodium and persistent calcium, have been characterized in Purkinje cells [28,29]. Somatic Purkinje cell bistability has been associated with persistent sodium, whereas dendritic bistability has been shown to result from a persistent calcium conductance. Here we use persistent sodium in our model for simplicity, but it is likely that it is the combination of these two currents that enables the bistability [30]. 
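To make the coupling scheme above concrete, the short Python sketch below builds a 25-neuron all-to-all coupling matrix with asymmetric weights drawn from the standard uniform distribution and zero self-coupling, and computes the LFP proxy as the average membrane potential. The variable names, the random seed, and the convention that w[i, j] weights the influence of neuron j on neuron i are our own illustrative assumptions rather than the paper's notation.

```python
import numpy as np

N = 25  # number of neurons in the network (as in the model described above)

rng = np.random.default_rng(seed=0)

# All-to-all, asymmetric coupling: w[i, j] scales the influence of neuron j on
# neuron i, drawn from the standard uniform distribution; self-coupling is zero.
w = rng.uniform(0.0, 1.0, size=(N, N))
np.fill_diagonal(w, 0.0)

def lfp(v):
    """Proxy for the local field potential: the average membrane potential."""
    return float(np.mean(v))

# Example: a vector of membrane potentials near the down state (about -65 mV).
v = -65.0 + rng.normal(0.0, 1.0, size=N)
print(f"coupling matrix shape: {w.shape}, LFP = {lfp(v):.1f} mV")
```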
On the basis of previous research, we propose the following neural network model to study the characteristics of the up and down transitions observed in electrophysiology experiments, namely bistability, directivity, and spontaneity. In this way we clarify the neural dynamic mechanism of the up and down transitions in a neural network. The current equation for the model, the ionic currents, the synaptic current, and the dynamics of the gating variables are given by equations (2) to (8). The synaptic current of the i-th neuron is a sum of effects from all the other neurons in the network, so this current plays a key role in coupling every two neurons in the whole network. The changing activity of one neuron affects the state of the whole network in this way. 

There are two dynamic variables, the membrane potential V_i and the inactivation term of the h-current h_i, when we discuss bistability and directivity. When studying spontaneity, we need another variable, b_i, the inactivation term of the potassium current. The dynamics of the inactivation terms of the h-current and the potassium current are given in the model equations, where h∞ = (1 + e^((V_i − T_h)/σ_h))^(−1). In these equations, V_i represents the membrane potential of the i-th neuron, while I_Na, I_h, I_K, and I_L denote the sodium current, the slow h-like current, the potassium current, and the leak current of the i-th neuron, respectively. Similarly, g_Na, g_h, g_K, and g_L represent the sodium conductance, the slow h-like conductance, the potassium conductance, and the leak conductance, and E_Na, E_h, E_K, and E_L are the corresponding reversal potentials. The inactivation terms of the sodium current, the h-like current, and the potassium current are described by the steady-state functions m∞, h∞, and b∞ and the dynamic variables h_i and b_i, with the time constants τ_h and τ_b. The remaining quantities in the equations are constants. 

Bistability. When we studied the single-neuron model, we found that transitions between up and down states can be induced by two different kinds of stimulus. One is to add brief outward current pulses; another is to increase the sodium conductance to a certain value instantaneously. Now, we study the neural network in the same way, to find out whether a similar phenomenon, in agreement with the electrophysiology experiments, also exists at the network level. 

Over a period of 10 seconds, we add a pulse current lasting 0.1 second every two seconds, with a current intensity of 7.2 μA/cm². The results are shown in Figure 1. We find that the average membrane potential switches between the up state (about −45 mV) and the down state (about −65 mV). 

Over a period of 10 seconds, we add a stimulation lasting 4 ms every one or two seconds that raises the sodium conductance from 0.06 mS/cm² to 1.2 mS/cm² instantaneously. The results are shown in Figure 2. We find that the average membrane potential switches between the up state (about −45 mV) and the down state (about −65 mV) under this stimulation as well. These transitions are a little more complex: the membrane potential rises up to 0 mV instantaneously but then drops quickly. 
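Since the model equations themselves are not reproduced above, the following Python sketch only illustrates the general structure of the kind of simulation just described: a conductance-based neuron with an instantaneous sodium current, a slow h-like current, potassium and leak currents, all-to-all coupling, and the pulse-current protocol from the text (7.2 μA/cm² for 0.1 s every 2 s). The functional forms of the currents, the diffusive coupling term, and all parameter values other than those quoted in the text are assumptions for illustration; reproducing the reported bistability would require the paper's actual equations (2) to (8) and parameter values.

```python
import numpy as np

# Illustrative parameters: only N, the baseline sodium conductance, g_K, and the
# stimulation protocol are taken from the text; everything else is assumed.
N = 25
C = 1.0                      # membrane capacitance, uF/cm^2 (assumed)
g_na, e_na = 0.06, 55.0      # persistent sodium: baseline conductance (text), reversal (assumed)
g_h, e_h = 0.05, -20.0       # slow h-like current (assumed)
g_k, e_k = 0.1, -90.0        # potassium: conductance value used in the text, reversal assumed
g_l, e_l = 0.02, -70.0       # leak (assumed)
g_c = 0.01                   # coupling scale (assumed)
tau_h = 200.0                # h-gate time constant, ms (assumed)
T_h, sigma_h = -60.0, 5.0    # parameters of h_inf (assumed)

rng = np.random.default_rng(1)
w = rng.uniform(0.0, 1.0, (N, N))
np.fill_diagonal(w, 0.0)

def m_inf(v):
    """Instantaneous sodium activation (assumed sigmoidal form)."""
    return 1.0 / (1.0 + np.exp(-(v + 50.0) / 6.0))

def h_inf(v):
    """Steady state of the h-gate, using the sigmoidal form quoted in the paper."""
    return 1.0 / (1.0 + np.exp((v - T_h) / sigma_h))

def i_stim(t_ms):
    """Pulse protocol from the text: 7.2 uA/cm^2 for 0.1 s every 2 s."""
    return 7.2 if (t_ms % 2000.0) < 100.0 else 0.0

def simulate(t_total_ms=10_000.0, dt=0.05):
    v = np.full(N, -65.0)    # start near the down state (about -65 mV)
    h = h_inf(v)
    steps = int(t_total_ms / dt)
    lfp = np.empty(steps)
    for k in range(steps):
        t = k * dt
        i_ion = (g_na * m_inf(v) * (v - e_na)   # persistent sodium
                 + g_h * h * (v - e_h)          # slow h-like current
                 + g_k * (v - e_k)              # potassium
                 + g_l * (v - e_l))             # leak
        # Diffusive all-to-all coupling as a stand-in for the synaptic term:
        # neuron i receives g_c * sum_j w[i, j] * (v[i] - v[j]).
        i_syn = g_c * (w.sum(axis=1) * v - w @ v)
        v = v + dt * (-(i_ion + i_syn) + i_stim(t)) / C
        h = h + dt * (h_inf(v) - h) / tau_h
        lfp[k] = v.mean()
    return lfp

if __name__ == "__main__":
    trace = simulate()
    print(f"network-average potential ranges from {trace.min():.1f} to {trace.max():.1f} mV")
```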
So from the above two results, we find that this dynamic model can describe the bistability of up and down transitions of neural membrane potential in the neural network.That means there are two stable states for neural network, with other states unstable.The network can stay in one of these two stable states without any input.When the neuron in the network is stimulated, which destroys its original stability, it can switch its state from one to another to adjust itself to a new balance.These two states are called the up state and the down state, respectively.That is to say, the up and down transitions can be modulated by external stimulations.The ionic movement between inside and outside of the membrane may be the mechanism of the transitions.When sodium conductance increases to a certain level, it causes slight depolarization, activating the sodium channel with sodium move into cells, which increases the range of the polarization.In return, the larger the range of depolarization occurs, the more the sodium channels are activated and the more the sodium moves into cells.When it arrives to the peak of membrane potential, the sodium channel is inactivated and the h-like channel is activated, which leads to repolarization of the membrane potential.When the membrane potential reduces to about −45 mV, h-like channel is inactivated.At this point, a new balance between the outflow of potassium and the inflow of sodium begins.Namely, membrane potential stays in a stable state.According to the different extent of the h-like channel inhibition, the membrane potential stays in the up state (about −45 mV) or the down state (about −65 mV). Directivity. In the model of a single neuron, directivity of the transition is modulated by potassium conductance.We found that when = 0.1 mS/cm 2 , membrane potential can transit both from the down state to the up state and from the up state to the down state, when = 0.09 mS/cm 2 , membrane potential can only transit from the down state to the up state, and when = 0.105 mS/cm 2 , membrane potential can only transit from the up state to the down state. In the model of neural network of this paper, we also do research on the directivity.And we find that the changing of sodium conductance can modulate the directivity of the transitions as well as potassium conductance.Figures 3-7 describe different transition modes adjusted by different values of potassium conductance and sodium conductance.The tops of Figures 3-7 are average membrane potential of the neural network, namely, up and down transitions, while the bottoms are phase plane for the mean of two kinds of dynamic variables ℎ and in the model, denoted by mean , ℎ mean . mean is average membrane potential of all the neurons in the network.ℎ mean is average inactivation rate of all the h-like channel in the network.The red solid line shows all the points that ℎ = 0, the blue dot line shows all the points that = 0, and the intersection of these two lines is stable point of the system.In other words, the two points are stable states of the network, and other points in the plane are unstable.That means, the system will stay in any one of the two stable points after a long run.The green solid line in the figure presents the transit process from one stable point to another. 
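The phase planes described above overlay two nullclines, the set of points where dV/dt = 0 and the set where dh/dt = 0, and read the stable states off their intersections. The sketch below shows how such nullclines can be computed numerically for a reduced two-variable (V, h) system; the current expressions and parameter values are the same illustrative assumptions as in the earlier simulation sketch, not the paper's mean-field equations.

```python
import numpy as np

# Illustrative parameters (assumed; same caveats as the simulation sketch above).
g_na, e_na = 1.2, 55.0
g_h, e_h = 0.05, -20.0
g_k, e_k = 0.1, -90.0
g_l, e_l = 0.02, -70.0
T_h, sigma_h = -60.0, 5.0

m_inf = lambda v: 1.0 / (1.0 + np.exp(-(v + 50.0) / 6.0))
h_inf = lambda v: 1.0 / (1.0 + np.exp((v - T_h) / sigma_h))

v_grid = np.linspace(-90.0, -25.0, 651)   # stay below e_h to avoid dividing by zero

# h-nullcline: dh/dt = 0  =>  h = h_inf(V).
h_null = h_inf(v_grid)

# V-nullcline: dV/dt = 0  =>  the h value at which the total membrane current vanishes,
# obtained by solving  g_h * h * (V - e_h) = -(I_Na + I_K + I_L)  for h.
other_currents = (g_na * m_inf(v_grid) * (v_grid - e_na)
                  + g_k * (v_grid - e_k)
                  + g_l * (v_grid - e_l))
v_null = -other_currents / (g_h * (v_grid - e_h))

# Fixed points of the reduced system lie where the two curves cross.
sign_change = np.where(np.diff(np.sign(h_null - v_null)) != 0)[0]
if sign_change.size == 0:
    print("no nullcline crossing on this grid for these illustrative parameters")
for idx in sign_change:
    print(f"approximate fixed point near V = {v_grid[idx]:.1f} mV, h = {h_null[idx]:.2f}")
```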
Figure 3 shows that when = 0.1 mS/cm 2 , membrane potential can transit from the down state to the up state by adding a stimulation that increase sodium conductance to Na = 1.2 mS/cm 2 instantaneously.With the same stimulation, it also can transit from the up state to the down state.So the transitions are bidirectional on condition that = 0.1 mS/cm 2 and Na = 1.2 mS/cm 2 .The h-V phase plane further shows that the system transmits between the two stable states. We can observe the changing of up and down transitions of the whole network by making some changes on the potassium conductance while keeping sodium conductance unchanged; namely, Na = 1.2 mS/cm 2 .The results are shown in Figures 4-5. Figure 4 represents that when = 0.09mS/cm 2 , the average membrane potential can transit from the down state to the up state by adding a stimulation that increase sodium conductance instantaneously.But with the same stimulation, the average membrane potential always stays in the up state without any change.In other words, the transitions that are unidirectional vary from the down state to the up state in the circumstances that = 0.09 mS/cm 2 .The h-V phase plane also shows that the system can only vary from the lower membrane potential stable point to higher one and then move around the higher one periodically.Figure 5 reveals that when = 0.105 mS/cm 2 , the average membrane potential can transit from the up state to the down state by adding a stimulation that increase sodium conductance instantaneously.However, with the same stimulation, the average membrane potential always stays in the down state without any change.In other words, the transitions that are unidirectional vary from the up state to the down state under the circumstances that = 0.105 mS/cm 2 .The h-V phase plane also presents that the system can only vary from the higher membrane potential stable point to lower one and then move around the lower one periodically. Accordingly, we can also observe the changing of up and down transitions of the whole network by making some changes on the sodium conductance while keeping potassium conductance unchanged; namely, = 0.1mS/cm 2 .The results are shown in Figures 6-7. Figure 6 presents that when Na = 0.8mS/cm 2 , the average membrane potential can transit from the down state to the up state by adding a stimulation that increase sodium conductance instantaneously.But with the same stimulation, the average membrane potential always stays in the up state without any change.In other words, the transitions that are unidirectional vary from the down state to the up state in the circumstances that Na = 0.8 mS/cm 2 .The h-V phase plane also shows that the system can only vary from the lower membrane potential stable point to higher one and then move around the higher one periodically. 
Figure 7 reveals that when Na = 2 mS/cm 2 , the average membrane potential can transit from the up state to the down state by adding a stimulation that increase sodium conductance instantaneously.However, with the same stimulation, the average membrane potential always stays in the down state without any change.In other words, the transitions that are unidirectional vary from the up state to the down state under the circumstances that Na = 2 mS/cm 2 .The h-V phase plane also presents that the system can only vary from the higher membrane potential stable point to lower one and then move around the lower one periodically.The above results reveal that this dynamic model can well describe the bidirectional or unidirectional characteristic of up and down transitions of neural network when stimulated by certain stimulus.These results accord with the results of a single neuron model.Transitions in the network can be bidirection from the up state to the down one and vice versa.And also may be single direction from the up state to the down one, or only from down state to the up one according to the different level of conductance. Spontaneity. In the discussion of bistability and direction of the model of a single neuron, we should introduce the input of synapse to generate the up and down transitions.Is a neuron still able to up and down transit, if there is no input of synapse?Virtually, in vivo or in vitro experiments of animals show that the potential of neural membrane can transit between up state and down state spontaneously and periodically.By increasing the variable of the inactivation of a potassium conductance rate in the original model, we can obtain the result that is identical to the experimental result. In this paper, we introduce the dynamic variable b, the inactivation rate of potassium conductance of each neuron, to study the spontaneous transitions of neural networks. The calculated results shown in Figure 8, are case without external stimuli showing that the average membrane potential transit spontaneously and periodically, while the distribution graph illustrates the distribution of the average membrane potential, a two-peak distribution, indicating the two stable state of up and down transitions of membrane potential. By adding the interval of 1 or 2 seconds and lasting time of 4 ms stimuli on this spontaneous network model, the intensity of sodium conductance increases rapidly from 0.06 mS/cm 2 to 1.2 mS/cm 2 ; the computed results are shown in Figures 9-11. The tops of Figures 9(a) and 10(a) show the changes of membrane potentials after adding stimuli, respectively, and the bottoms show the corresponding distributions of membrane potentials. The simulating results of each neuron stimulated in the network are shown in Figure 9. 
Compared with the case without stimuli, after adding stimuli the spontaneous transitions of the whole network stop. Because of the disruption brought by the external stimuli, each neuron, which would otherwise have been able to transit on its own, is driven either to another stable state or back to its original stable state. However, for the whole network the instant of transition is different for each neuron, so the transition ability of the network is not well captured simply by averaging the membrane potentials. As a result, all the transitions we can see are caused by external stimuli; conversely, not every stimulus can generate a transition. For instance, the stimulus at 8 s fails to generate a state transition, which indicates that these transitions have something to do with the state of the neural network. The network in a down state is more susceptible to external stimuli and transits, while in the up state it is not susceptible to external stimuli, since the network has a strong self-stability. The simulation results when only some neurons in the network are stimulated are shown in Figure 10. Compared with the results in Figure 9, this network maintains its spontaneous transition ability; that is to say, it transits at moments without any stimulus. On the other hand, the synaptic input is also able to make the state of the network transit, and each stimulus leads to a state transition. Because there is coupling among the neurons, when a neuron is stimulated the stimulus is transmitted everywhere else in the network by the coupling, so as to change the state of the whole network. 

To observe this transmission, Figure 11 shows the membrane potentials of a neuron that is stimulated directly and one that is stimulated indirectly. The two potentials are similar, since when a neuron is directly stimulated its membrane potential changes correspondingly, and according to (8), this change is transmitted to the others without delay. This is one of the aspects we intend to improve in the future. 

The obtained results illustrate that this dynamical model does depict the phenomenon of spontaneous and periodic transitions. By adjusting the number of stimulated neurons, the transition behavior of the network differs. When every neuron is stimulated, the spontaneous transitions of the network disappear, and the external stimuli play the dominant role in the transitions. When a stimulus is added to a single neuron, besides the spontaneous transitions, the network is also able to respond to the external stimuli and transit. Such transmission between the coupled neurons is very fast and has no delay. 
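The bimodal distribution of the average membrane potential described for Figure 8 can be quantified by histogramming the simulated LFP trace and locating its two dominant peaks. The helper below assumes a 1-D trace such as the output of the `simulate()` sketch shown earlier; the synthetic data in the example are only a stand-in for an actual simulation.

```python
import numpy as np

def bimodality_summary(lfp_trace, bins=60):
    """Histogram an average-membrane-potential trace and report its two largest local peaks."""
    counts, edges = np.histogram(lfp_trace, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # Local maxima of the histogram (simple neighbour comparison on interior bins).
    is_peak = (counts[1:-1] >= counts[:-2]) & (counts[1:-1] >= counts[2:])
    peak_idx = np.where(is_peak)[0] + 1
    if peak_idx.size == 0:
        return []
    # Keep the two most populated peaks, reported in order of increasing voltage.
    top_two = sorted(peak_idx[np.argsort(counts[peak_idx])[-2:]])
    return [(round(float(centers[i]), 1), int(counts[i])) for i in top_two]

# Example with synthetic data mimicking down (about -65 mV) and up (about -45 mV) states.
rng = np.random.default_rng(2)
fake_lfp = np.concatenate([rng.normal(-65.0, 1.5, 5000), rng.normal(-45.0, 1.5, 5000)])
print(bimodality_summary(fake_lfp))   # expect peaks near -65 mV and -45 mV
```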
The above conclusions are similar to the results for up and down transitions of a single neuron, since the bursting characteristics of a single neuron dominate the real activities of the neural network and the dynamics of a single neuron represent the behavior of the whole network. In this paper, the study of up and down transitions is proposed as preparation for future work on large-scale neural populations and up and down transitions in network behavior, so as to understand the effect of a single neuron's transitions on network behavior under population coupling, and as a foundation for research on the dynamical mechanism linking the spikes of a single neuron and the behavior of the network.

Figure 9: Adding stimuli to each neuron of the spontaneous model to rapidly increase the sodium conductance.
Figure 11: Membrane potential of a neuron stimulated directly or indirectly, respectively.
v3-fos-license
2021-08-24T13:18:52.254Z
2021-08-20T00:00:00.000
237271163
{ "extfieldsofstudy": [ "Computer Science", "Biology" ], "oa_license": "CCBY", "oa_status": "GREEN", "oa_url": "https://doi.org/10.1101/2021.08.20.457045", "pdf_hash": "d52ec369adaeada4c1db4292f35bc24d8a8c75ad", "pdf_src": "ScienceParsePlus", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:153", "s2fieldsofstudy": [ "Political Science", "Education" ], "sha1": "1da76668628274e7c2171a6768ecc8ca4f7e12ad", "year": 2021 }
pes2o/s2orc
Free for all, or free-for-all? A content analysis of Australian university open access policies
Recent research demonstrates that Australia lags in providing open access to research outputs. In Australia, while the two major research funding bodies require open access to outputs from projects they fund, these bodies only fund a small proportion of the research conducted. The major source of research and experimental development funding in Australian higher education is general university, or institutional, funding, and such funds are not subject to national funder open access policies. Thus, institutional policies and other institutional supports for open access are important in understanding Australia's OA position. The purpose of this paper is, therefore, to understand the characteristics of Australian institutional open access policies and to explore the extent to which they represent a coherent and unified approach to delivering and promoting open access in Australia. Open access policies were located using a systematic web search approach and their contents were then analysed. Only half of Australian universities were found to have an open access policy. There was wide variation in the language used, the expressed intent of the policy, and the expectations of researchers. Few policies mention monitoring or compliance, and only three mention consequences for non-compliance. While it is understandable that institutions develop their own policies, when language is used which does not reflect national and international understandings, and when requirements are not clear or are not backed by consequences, policies are unlikely to contribute to understanding of open access, to uptake of the policy, or to ease of transferring understanding and practices between institutions. A more unified approach to open access is recommended. 

Introduction In the context of scholarly communication, Open Access (OA) refers to the principle that research outputs should be freely and openly available for all users in a form which is "digital, online, free of charge, and free of most copyright and licensing restrictions" (Suber, 2012; p. 14). OA has been shown to have significant and tangible social and economic benefits (Tennant et al., 2016). Ensuring that access to published research outputs is equitable and cost effective is an important challenge for the higher education sector, and for Australian society at large (CAUL & AOASG, 2019). 

The OA landscape is complex and multi-faceted. As Pinfield et al. have shown (2020), multiple dimensions operating at different levels involve many different actors combining in intricate ways. Advancing OA performance therefore requires the formulation and implementation of a range of strategies and processes carefully designed to address behavioural, technical and cultural issues. To take the example of just one geographic region, Europe, the last 10 years have seen a huge range of projects and policy initiatives designed to drive up OA publishing rates. These national and supranational policies and frameworks are complemented by institutional policies. 
Recent research has shown that the European higher education sector has a sophisticated approach to institutional policy development and implementation, with 91% of institutions either having or developing an Open Science policy, a notable figure given that such policies go beyond the boundaries of open access specifically and embrace the broader concept of open science (Morais et al., 2021). The overall effect has been striking, with European countries in leading positions in rankings of global OA performance (European Commission, n.d.b). 

In contrast, Australia, once a leader in global OA efforts, is now "lagging behind", and there are urgent calls for open scholarship to be a national priority (CAUL & AOASG, 2019; Foley, 2021). Recent research evaluating international OA performance levels has shown that "universities in Oceania … lag behind comparators in Europe and Latin America" (see Figure 1) (Huang et al., 2020). There is also increasing awareness that policies, and particularly OA mandates, are a crucial means of driving up OA adoption levels (Larivière & Sugimoto, 2018). In contrast to the European examples cited above, there is no overarching open access position in Australia. While major national funders such as the Australian Research Council (ARC) and the National Health and Medical Research Council (NHMRC) have policies requiring research to be made available OA, more than 30% of outputs from such funded projects fail to comply with this mandate (Kirkman & Haddow, 2020), and indeed these funders represent only 14.6% of all higher education R&D funds (Australian Bureau of Statistics, 2020). The major source of research and experimental development funding in Australian higher education organisations is general university, or institutional, funding (56%), and such funds are not subject to national funder OA mandates (Australian Bureau of Statistics, 2020). The Australian Code for the Responsible Conduct of Research requires universities to develop and maintain "policies and mechanisms that guide and foster the responsible publication and dissemination of research" (Australian Government, 2018; p. 2). Since there is also a requirement that "institutions should support researchers to ensure their research outputs are openly accessible in an institutional or other online repository, or on a publisher's website" (p. 3), it seems clear that institutional OA policies have an important role to play in driving OA performance. However, regional comparisons published by the Curtin Open Knowledge Initiative (COKI, http://openknowledge.community), which are dynamically updated over time (Figure 1), clearly indicate that institutions in Oceania, where Australia is located, perform relatively poorly in terms of open access (animated versions of the open access data for Australia and other countries are available from COKI, including the evolution of Australian open access levels, a comparison of countries by open access category, and open access over time for institutions and for countries). It is possible that one reason is that the policy framework at the institutional level does not encourage or support OA. There have, however, been a number of international attempts to provide guidance on what policies, including institutional policies, should include in order to support OA (Morrison et al., 2020; Swan et al., 2015). In particular, this project is seeking to answer the following research questions:

• What are the characteristics of Australian institutional OA policies, in terms of content, intent and compliance mechanisms?
• To what extent do Australian university OA policies represent a coherent, unified approach to delivering and promoting OA in Australia?

Literature review There have been some prior efforts to identify OA policies at Australian universities. Kingsley (2011) noted that in 2011 seven Australian universities had OA "mandates", but argued that several of these institutions were in fact "encouraging" rather than mandating OA. A later analysis by the same author found that as of January 2013, nine universities either had, or were in the process of implementing, OA mandates (Kingsley, 2013). Callan (2014) identified 12 institutions with OA policies in her review of the Australian OA landscape. 

More broadly, a number of previous studies have reported individual institutions' experiences of developing and implementing OA policies, both in Australia and internationally. Cochrane & Callan (2007) describe the development, implementation and impact of an eprint deposit mandate at Queensland University of Technology (QUT), describing collaboration between the University Research Office and Library, and the advocacy work required to obtain buy-in from research staff. Soper (2017) similarly outlines the background to the passing of an OA policy at Florida State University, with a rich description of institutional politics and advocacy efforts, and discussion of processes and systems developed to support its implementation. Kern & Wishnetsky (2014)

Beyond the institutional policy setting, earlier work has also focused on the nature and impact of national and funder policies. Crowfoot (2017) analyses the OA policies of the funding bodies who are members of the Science Europe association, noting some variation in the alignment of policies, particularly regarding embargoes and positions on gold OA options. She also notes potential issues with monitoring and compliance mechanisms. Huang et al. (2020) report a detailed analysis of international OA performance at an institutional level, and link these performance levels to the various policy interventions at a national and funder level. In the UK, Awre et al. (2016) when OA should be delivered, copyright issues), the different actors involved, and the levels of support encompassed by the policies. The authors review institutional, national and funder OA policies affecting Dutch researchers, and note whether these policies meet the various criteria outlined in the framework. The analysis provides important insights into the broader OA landscape, but seeks to illustrate the utility of the framework rather than undertake a granular content-based comparative analysis of the policies under review. 

Fruin & Sutton (2016) report the results of a questionnaire relating to OA policies at US institutions. 
The questionnaire explored the rationale, processes and content of the institutions OA policies, and was directed only at institutions which have implemented, were in the process of implementing, or had attempted but failed to implement an OA policy.Of the 51 institutions which responded, 41% identified that their policy required deposit in an institutional repository (IR), while 14% characterised their policy as merely a statement of encouragement to publish OA, and 10% as asking faculty to opt-in to the practice of self-archiving and open access.Most policies were found to incorporate waivers for authors on request, and without any explanation being required.In the "vast majority" of cases, OA policies were found to originate from and be driven and managed by libraries.The questionnaire also sought to understand the arguments used to support the principles of OA, with the most commonly used justifications relating to author rights retention, access for all, and the goal of ensuring that publicly-funded research was publicly available.Perhaps the most significant findings relate to policy waivers and embargoes.The majority of institutions (70%) were found to respect publisher embargoes, with 10% not observing them, and 20% doing so only in certain cases.As the authors note, "the decision to honor these embargoes is typically an element of the implementation of the policy, which is usually led by the library as manager of the institutional repository" (p.481). Similarly, Duranceau & Kriegsman (2015) surveyed Coalition of Open Access Policy Institutions (COAPI) members about their OA policy development and implementation.From their analysis they identify four models of OA policy implementation: systematic recruitment (i.e. the library collecting publication metadata and using this to "request and acquire" publications from faculty), targeted outreach (i.e. the library targeting specific departments of faculties who are seen to be more "receptive" to OA), faculty profiling (i.e.use of a tool or system by which researchers can submit bibliographic metadata and upload outputs), and harvesting (i.e. the copying of research outputs from other repositories or publisher sites).The authors suggest that these approaches are potentially sequential, with institutions potentially moving towards a harvesting model as infrastructure and policy development allows. 
Examining European OA policy alignment based on an analysis of ROARMAP (an international registry of self-reported OA policies), European funder policies and 365 institutional policies, Hunt & Picarra (2016) found that 63% of institutional policies mandated the deposit of research items, with 96% of policies specifying a repository as the locus of deposit. However, only 38% of institutional policies were found to include a requirement to make that deposit open access. The study also included analysis of deposit waivers, with 18% of institutional policies found to support waivers and 21% found not to support them, with the remainder of the policies not specifying or not applicable. Examining the timeframe for making research outputs OA, 43% of institutional policies were found not to mention this, while 45% of policies specified either when the publisher permits or by the end of a policy-permitted embargo. Only a very small number of policies were found to specify acceptance date (3%), publication date (3%) or as soon as deposit is completed (1%). It is important to note that a key finding of this study was that a significant number of institutional policies "do not specify or mention some essential elements which are critical to promoting a strong, effective policy" (p. 4), particularly relating to the time period for making outputs OA, the length of embargo periods, 'Gold' OA publishing options, and a clear link to research evaluation (for example a note that non-compliance can impact the research assessment process). 

From the early days of OA to more recent times, the literature asserts how the various actions of government, funders and institutions (such as providing finance and supporting infrastructure, recognising and rewarding, legalising and promoting, setting policy and stating OA as a goal) are all interrelated and reinforce each other in the uptake of OA. In this paper we analyse the content and intent of Australian institutional OA policies, as a means of better understanding how they might contribute (or otherwise) to the uptake of OA in Australia. 

Method The study applied a content analysis approach. In order to identify institutions we used the Australian Government's List of Australian Universities (https://www.studyinaustralia.gov.au/English/Australian-Education/Universities-Higher-Education/list-ofaustralian-universities), which includes 42 universities. We initially consulted the Registry of Open Access Repository Mandates and Policies (ROARMAP) to identify which of the 42 institutions have formal OA policies. However, it became apparent that, whilst a useful resource, ROARMAP does not contain up-to-date information for some institutions. We therefore conducted our own systematic search of institutional websites in order to identify policies. This process first involved searching university websites to locate their policy libraries. Within the policy libraries, further searches were conducted using the following keywords:

• "Open access"
• "Publication"
• "Authorship"
• "Research"
• "Dissemination"
• "Theses"
• "Intellectual Property"

The policies, procedures or guidelines returned by these searches were reviewed, and policies relevant to OA were downloaded and saved. Where institutions' policy libraries did not appear to have any policies, procedures or guidelines that referred to Open Access, and for institutions without a searchable policy library, the university website was searched with the term 'Open Access' to identify any other relevant documentation. 
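The screening step described above can be mirrored in a few lines of Python: each policy-library entry is matched against the fixed keyword list and given a rough first-pass label corresponding to the categories used later in the classification. The document titles and the simple title-based rule are illustrative assumptions, not the study's actual data or coding procedure.

```python
# A minimal sketch of the screening step described above: each policy-library search
# is reduced to matching a fixed keyword list against document titles.

KEYWORDS = ["open access", "publication", "authorship", "research",
            "dissemination", "theses", "intellectual property"]

def matched_keywords(title: str) -> list[str]:
    """Return the search keywords that a policy-library entry would match."""
    lowered = title.lower()
    return [kw for kw in KEYWORDS if kw in lowered]

def classify(title: str) -> str:
    """Rough first-pass label, loosely following the categories used in the study."""
    lowered = title.lower()
    if "open access" in lowered and "policy" in lowered:
        return "formal OA policy"
    if "open access" in lowered:
        return "OA guidelines/procedures/principles"
    if matched_keywords(title):
        return "other policy possibly referencing OA"
    return "no OA-related document found"

# Example with hypothetical document titles:
for t in ["Open Access Policy", "Open Access Guidelines",
          "Research Publication Policy", "Staff Travel Procedure"]:
    print(f"{t!r:35} -> {classify(t)}")
```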
The resulting documentation was reviewed, and the 42 institutions were then subjected to an initial classification of policy scope. A key question that informed this analysis related to how to define a "policy". The language used in policy development is very specific. Terms such as policies, procedures, guidelines and others have particular meanings, and these are articulated in some instances (University of Queensland, 2020). National governance frameworks, such as those provided by the Tertiary Education Quality and Standards Agency, also emphasise the importance of formal policies. For the purposes of this study we therefore defined a formal OA policy as a document with the terms "open access" and "policy" in the title, and which was located either in the institution's policy library or elsewhere on the main university website. A second category consisted of institutions with policies that mention or relate to OA, but as part of a document with a broader scope (e.g. "Research publication policy", "Research Repository Policy"). The third category consisted of institutions without a formal OA policy, but which provided less formal OA guidelines, principles or procedures documents. These documents were sometimes found to be published on LibGuide sites. The final category related to institutions without any policy, formal or informal, relating to OA. 

Carnegie Mellon University was excluded from the study at this point, as the relevant policies were found to originate from the US parent institution, rather than being developed in Australia. This left a total of 41 universities. 

This process resulted in the identification of 20 (48.8%) universities which have a formal OA policy (Table 1). Eleven (26.8%) institutions were found not to have OA policies, but to have other policies that reference OA, while five (12.2%) universities without formal OA policies instead have other OA-specific documents titled principles, procedures or guidelines. Nine universities (22.0%) were found to have no policies, procedures or guidelines related to OA. For the purposes of this research we focused our analysis on the 20 formal OA policies identified during the initial classification process. Including other types of policy that mention OA, or non-formal OA policy documents, would have made the analysis impractically complex, and would have required comparing documents with very different purposes and scope. All formal OA policies were downloaded between November 2020 and January 2021 and subjected to analysis using a mix of checklist and content analysis. We note here that some institutions may have updated existing policies, or introduced new ones, since the data were collected, and we acknowledge that the results are indicative of the OA landscape at the time the research was conducted. In selecting which aspects of the Australian policies to analyse, ROARMAP data categories were consulted along with some literature (Awre et al., 2016; Bosman et al., 2021). The categories used in the various analyses were different in each context, and used different language to describe similar concepts. To create a list of categories for analysis we borrowed from these previous studies and sites, then combined and/or added categories, labelling them in ways that seemed relevant to the Australian context and, where possible, using language reflected in the policies. This process involved all the research team and much discussion and refinement of the categories (for example, the intent of the policy). Some of these categories were simple fact checking, but most items 
required more rigorous content analysis and categorisation.The categories (for instance categories for exceptions) were developed inductively by the researchers and were discussed and refined by the research team.The information collected for each policy was checked by at least two researchers to ensure its accuracy.All information and coding was recorded in a spreadsheet that is publicly available here: https://doi.org/10.6084/m9.figshare.15595572.v1. The twenty OA policies were analysed in relation to their statements on paying for publication by creating a table that included any text copied from the policies that related to this issue.The text was then analysed for statements relating to hybrid publishing, any position on green or gold open access, whether funds were mentioned and if so whether any caveats were placed against those funds.We were aware that the absence of a statement doesn't mean endorsement of the opposite position. An analysis of the intent of the policies was more complex.Again each policy was scanned for language that referred to the purpose of the policy, but additionally any information that referenced a 'position' on how to approach open access.Once this text was extracted into a separate table, the text was analysed for any terms that were repeated or distinct.This identified different approaches from institutions such as recommending the use of an author's addendum, or that authors retain the copyright of their work.These were mapped into a table.A related analysis considered the specific language around the 'benefit' of the policy which fell into clear categories including the 'benefit to society', 'increasing access to research' and 'maximising the research impact'.These too were mapped into a table and are reported below. Number and date of policies As noted above, 20 out of 40 Australian universities were found to have active OA policies.This represents an increase on the 11 institutions found to have policies in 2013 (Kingsley, 2013).In our study, where the date of the first version of these policies was reported (19), the oldest policy was from 2003, and the second oldest was created in 2008.The two years of 2013 and 2014 were clearly a key time for policy creation as 10 policies were implemented during this period.Three of the policies were created in 2020.The median age of the policies was seven years (created in 2014).However, we must acknowledge the possibility that some institutions may have had older policies that were subsequently superseded, and so were not included in our analysis.Analysis of the Date of next scheduled review of the policy data shows that of the 19 policies with a stated review date, ten show a historic date, with eight of these being pre-2020. 
To whom the policy applies Variation was found among OA policies in terms of the statement defining the people to whom the policies apply.In some cases definitions were detailed and granular: "This policy applies to all staff (including conjoint and adjunct staff) and students undertaking research at UNSW, either full-time or part-time and applies to scholarly research outputs" (UNSW).Others were much more general: "This Policy applies across the University" (ANU).Statements regarding to whom the policy applies were analysed, and categorized into four groups: staff, students, affiliates, and contractors.The findings have been summarized in Table 2, which shows that all OA policies were found to apply to staff, and the vast majority to students and affiliates.sponsor; Reference Authority and Responsible officer.Each policy was found to mention at least one of these terms.In some policies, a number of different roles were mentioned.For example, the Australian Catholic University policy names an Approval authority, Governing Authority, and Responsible Officer. In 13 policies (65%) a position in the library was identified as responsible for the policy.Various positions have been allocated in charge of policies.In most cases the Library Director or University Librarian was named.Eight policies identified a non-Library contact as the person responsible for the policy, these most often being pro or deputy vice-chancellors.These results are broadly consistent with Fruin & Sutton's findings relating to US OA policies (2016), which found that OA policies were library led in the "vast majority" of cases. Role of the library in OA policies Five OA policies (25%) did not mention the library.For the remaining 15, the role of the library was most often related to the institutional repository, delivering assistance or advice, and supporting copyright compliance (Table 3).It is important to note here that Fruin & Sutton's study of US institutional OA policies found that while university libraries typically played a major role in OA policy development, implementation and support, these roles were often not articulated in the policies themselves.It is therefore possible that our results are not truly representative of the library involvement with OA activities. 
Mentions of funders in OA policies References to funders were found in all 20 policies.Analysis of these references, however, reveals significant differences in the relationships between funder and institutional OA policies (Table 4).Policy states that it will facilitate reporting on ARC/NHMRC policy compliance 1 (5%) While one policy mentions funders only in the context of the policy being a tool to facilitate reporting and monitoring of compliance with funder policies, all the other policies incorporate funder OA policies is a substantial way.Funders were found to be most often mentioned in the context of institutions stating that their policies comply with funder requirements.The implication here is that institutional policies have been designed to ensure alignment with minimum funder requirements, such that researchers acting in accordance with the institutional policy will automatically comply with ARC and NHRMC policies.In addition, two policies -those of QUT and the University of Queensland -while not specifically mentioning compliance, explicitly state that they are based on the requirements of ARC/NHMRC policies.In contrast, five policies include specific requirements for authors to comply with national funder OA policies, in addition to the requirements outlined elsewhere in the policies, implying that adhering to the standard policy requirements alone would not ensure compliance with funder policies.In the remaining four cases, policies are less clear about the relationship between institutional and funder policies, stating only that they "support" funder policies, and including no specific requirements to adhere to funder policies. As Definitions As might be expected in a formal policy document, a very high proportion of university OA policies (90%) were found to include a definition of OA.It was expected that policy definitions would reflect commonly understood definitions of OA.While the majority of policies had a definition of OA very few of the definitions were the same.Definitions were shared only in two cases, each by two universities.It would be interesting to know if there was collaboration on policy development in these cases. Shared understandings are more likely to make it easier to implement OA within and between universities, at national and international levels, and to build community acceptance and understanding of OA models.Faced with the challenge of drafting a definition of OA for inclusion in an OA policy, we might expect authors to borrow from definitions found in well-known OA initiatives or related policies, , 2017).While it is understandable that definitions are simplified for accessibility of understanding, definitions which do not reflect national and international understandings or miss important aspects of open access are unlikely to contribute to understanding of OA, to uptake of the policy, or to ease of transferring understanding and practices between institutions. It is also important to note Morrison et al.'s (2020) recommendation that successful OA policies should use standardized and consistent language both within and across universities. 
Exceptions All 20 OA policies were found to specify various exceptions to the standard requirement that work be made available OA (Table 5).It should be noted here that the category creation was driven by the language used in the policy.It could be argued, for example, that Publisher agreement and Copyright or licensing restrictions are very closely related, but it was thought important to note the difference in language used in the policies.In practice this distinction typically referred to the difference between publisher embargo periods and other restrictions.For example: "Where deposit of the full-text material, or dataset, is not possible due to publisher embargo, or is not permissible due to copyright or licensing restrictions" (Bond University) Other exceptions included concerns related to commercial or cultural sensitivity, and confidentiality, although less than half of all OA policies specified these exceptions. Language used to describe the policy directive Our content analysis also included identification of the language used in association with the OA directive.Table 6 presents the most commonly found words used in association with the instructions to researchers, along with examples of the terms in context.In many cases the language is strong ("must", "required"), implying a mandate even if this particular word is not included.We also note a distinction between whether policies use an active ("researchers will …") or passive (research outputs must be made…") form.Once again, the language used to describe the policy is varied, in contrast to Morrison's recommendation (2020) that policy language should be standardised across institutions.One might also question the extent to which the language used constitutes a "mandate", as recommended by Swan et al. 
(2015).Notably only one policy was found to use the word "mandate".Compliance/consequences of breach of policy Taken in isolation, the language associated with OA directives, as described above, might be considered relatively strong.It is important, however, to consider this language not only in terms of the associated exceptions to the policy, but also to the compliance and monitoring mechanisms associated with the policy, and the stated consequences of a failure to comply.We found that OA policies typically do not explain the monitoring processes.This is perhaps not surprising, as a detailed outline of such activity might reasonably be considered to be beyond the scope of a formal policy document.It is also important to recognise that the requirement to comply with University policies, and consequences of non-compliance, may be outlined in other formal documents (such as employment contracts and codes of conduct).It is notable, however, that only three of the twenty OA policies (15%) specifically state the consequences of a failure to comply.We note that in all three cases, the impact of the statement is softened by the inclusion of the word "may": • "The University may commence applicable disciplinary procedures if a person to whom this policy applies breaches this policy (or any of its related procedures)" (Macquarie University) • "Non-compliance with this Policy may constitute research misconduct and/or general misconduct, which will be addressed in accordance with the University's Enterprise Agreement and relevant disciplinary procedures."(University of Adelaide) • "Breaches of this Policy may result in action being taken in accordance with the University Code of Conduct for Research."(UNE) That so few policies were found to explicitly discuss compliance monitoring is consistent with the findings of Hunt & Picarra (2016), who argued that "a significant number" of OA policies omit elements, including monitoring, and links to research assessment activities, that are "critical to promoting strong, effective policy".Crowfoot (2017) Payment for publication and hybrid journals We examined the specific statements used within policies that relate to paying for OA publication and these are reported in Figure 2. 
Detail of the positions taken on the payment of article processing charges (APCs) in subscription journals in particular (known as hybrid OA) are relevant, as charging authors an APC and readers a subscription for the same article has led to accusations of double dipping by these publishers (Phillips, 2020).A 2016 analysis of requirements of UK funders, and of US and UK library-run funds found there was a wide variety in the way the expectations around hybrid was expressed (Kingsley & Boyes, 2016).Given the Australian investment has historically focused on green open access (putting a copy of the work into repository), the assumption could be made that hybrid would be at odds with this strategy in Australia.Neither Bond University nor the University of Wollongong mention hybrid at all, but have opposing positions on paying for publication.Bond makes no restrictions or suggestions on paying for publication: "Use of external grant funding or discretionary University, Faculty, or Research Centre funding may be provided to cover the publishing costs i.e.Article Processing Charges, of accepted open access publications".Wollongong on the other hand states: "The University maintains a position to not pay for the publishing of online research where possible."However, the policy goes on to say: "Gold Open Access may be supported and funded at the faculty level where strategically or otherwise appropriate." It does state: "The University supports a Green approach to Open Access" which could possibly be interpreted as being anti-hybrid, but it is not specific. There are implied sanctions on hybrid in several policies that explicitly support 'Gold Open Access Journals' and which do not specifically mention hybrid.For example, Charles Darwin University uses the expression: "In a journal that is deemed to be Gold Open Access".Central Queensland University says: "the publishing outlet is considered to be Gold Open Access" and the University of New England suggests researchers should seek funding: "if wishing to publish in an open-access journal". Perhaps more than any other element of our analysis, the findings relating to positions on paying for publication illustrate the lack of a consensus vision for OA in Australia.Given Morrison's persuasive arguments for consistency and standardisation across institutions (2020), and the complexities associated with different visions for OA, this represents a significant issue for the delivery of OA in Australia. 
Timing of deposit There is evidence to demonstrate that the timing of a policy can make a difference to the compliance rate (Herrmannova et al., 2019).Only 13 of the 20 Open Access policies specify a timeframe within the policy, and there were found to be significant variations within these policies.The University of Melbourne and the University of Wollongong were found to specify the period of time within which the work should be deposited into the repository (within three months of publication and at the time of publication respectively).Australian National University has different requirements for deposit to the repository depending on the item in question, but for journal and conference publications, technical reports and other original, substantial works, the timing is: "within 3 months or as promptly as possible after publication".universities specify deposit of work into the repository "upon" or "after" acceptance for publication: Edith Cowan University, Macquarie University, Queensland University of Technology and Australian Catholic University.These are interesting because managing embargoes can be challenging if a work is deposited prior to publication.Publisher embargoes relate to a period of time after publication, so institutions requiring deposit prior to publication must have systems in place to revisit these works at the time of publication to set the embargo.As Larivière and Sugimoto (2018) have noted, it is essential that OA policies are supported by proper technical infrastructure and process design, for example, to switch embargoed deposits to open once embargo periods are over. This distinction between when something is deposited and when it is made openly available means in some instances a deposited item might not be made openly available for up to 24 months after deposit. For this reason, it is significant if a policy does stipulate when a work should be openly available, as distinct from simply when a work should be deposited.One example is the University of Adelaide which states: "Researchers are encouraged to avoid embargoes of greater than 12 months from date of publication.Where agreements do not allow outputs to be made Open Access within 12 months researchers should make reasonable attempts to negotiate this provision with the publisher."University of South Australia aspires for deposit and open access with the text: "as soon as is practicable and not later than twelve months after publication".The University of Sydney states "no later than 12 months after the date of publication".Both University of New South Wales and the University of Queensland identify making work openly accessible "within twelve months of publication" but both have the caveat "or as soon as possible".Western Sydney University is even more passive on pushing back on embargos, simply stating "as soon as possible". This inconsistency and lack of clarity about what is actually required in terms of deposit and openly accessible timeframes is a significant issue in relation to having a unified position and policy across the country.The variations in Australian institutional policies are consistent with findings from other studies, including Hunt and Picarra (2016), who found that 43% of European institutional policies failed to mention the timing of deposit, with most of the remainder using vague terms such as "when the publisher permits". 
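As noted above, a repository that takes deposits at acceptance has to revisit each record at publication, apply any publisher embargo from the publication date, and only then expose the file. A minimal sketch of that date arithmetic is given below; the record fields and function names are illustrative assumptions rather than any particular repository platform's schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RepositoryRecord:
    """Field names are illustrative assumptions, not a real repository schema."""
    title: str
    deposited: date                    # e.g. captured on acceptance
    published: Optional[date] = None   # set once the version of record appears
    embargo_months: int = 0            # publisher embargo counted from publication

def open_access_date(record: RepositoryRecord) -> Optional[date]:
    """Earliest date on which the deposited file may be made openly available."""
    if record.published is None:
        return None   # the embargo cannot be computed before publication
    months = record.published.month - 1 + record.embargo_months
    year = record.published.year + months // 12
    month = months % 12 + 1
    day = min(record.published.day, 28)   # avoid invalid end-of-month dates
    return date(year, month, day)

def is_openly_available(record: RepositoryRecord, today: date) -> bool:
    oa_date = open_access_date(record)
    return oa_date is not None and today >= oa_date

# Example: deposited on acceptance, published later, with a 12-month publisher embargo.
rec = RepositoryRecord("Example article", date(2021, 1, 10), date(2021, 6, 1), embargo_months=12)
print(open_access_date(rec))                       # 2022-06-01
print(is_openly_available(rec, date(2022, 7, 1)))  # True
```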
Intent of policies Considering the text used within a policy that refers to the intent or rationale behind the policy, a series of perspectives arise.By considering the different policy statements it is possible to identify if a policy supports particular positions on the question of open access.As is identified in Table 7, different policies make a statement about whether the university supports green or gold OA, and whether they have a position on hybrid journals.Some articulate a requirement about the retention of intellectual property and some offer options for managing this through an addendum.Some policies define the support the university will be providing and others are clear about the accountability of the effectiveness of the policy.When considering the motivation behind the policies, we identify three main categories: to increase the profile of the research of the university, to ensure the university research has a wide audience, or because open research benefits society.As with other aspects of our analysis, there is a clear lack of consistency in terms of the rationale used, with no single argument being adopted by more than half of the studied universities.Nine universities have a statement of intent within the policy which refers to a wider benefit beyond the institution.However, these vary considerably.The Australian Catholic University policy stands out because it is widely encompassing and quite specific in its stated purpose, referencing the Australian Deputy Vice Chancellors (Research) Committee mandated Findable, Accessible, Interoperable and Reusable (FAIR) Statement; intellectual, social, economic, cultural and environmental benefits; excellence, impact and engagement in research practice, including reproducibility and approaches that encourage collaboration and the transfer of knowledge between researchers and users of research in industry, government and the general community. The remaining policies that are classified within the 'open research benefits society' vary in scope and aim.Some policies specifically refer to benefits to society, for example the University of Queensland policy aims to "ensure the results of research are made available to the public, industry and researchers worldwide, for the benefit of society" and the University of New South Wales states that: "Open Access publication enables us to share our capability in research and education effectively and equitably with global partners and stakeholders ... Open Access supports the generation of new knowledge applied to solve complex problems, deliver social benefits and drive economic prosperity, locally, nationally and globally".Others are more tangential in their reference to benefits, such as James Cook University which notes the policy is "recognising that knowledge has the power to change lives". 
Discussion

Prevalence and strength of Australian institutional OA policies

The Australian Code for the Responsible Conduct of Research devolves responsibility to institutions to develop and maintain OA policies (Australian Government, 2018), but the results of this study demonstrate that only 20 (50%) of universities have a formal OA policy. This finding is particularly concerning given the wealth of evidence from around the world that confirms the positive impact of strong and consistent OA policies (Herrmannova et al., 2019; Huang et al., 2020; Larivière & Sugimoto, 2018; Rieck, 2019; Robinson-Garcia et al., 2020).

Funder mandates, institutional policies, grass-roots advocacy, and changing attitudes in the research community have contributed to the considerable growth in open access publishing during the last two decades (Huang et al., 2020). While having an open access policy in place is important, the literature demonstrates that a policy alone is not enough to ensure the OA of research outputs. Clear language and processes for the enforcement of OA in policies are required to stimulate significant growth of OA.

Our analysis found that none of the 20 existing Australian university OA policies mentioned monitoring of compliance, and only three specified any consequences for a failure to comply. While there may well be monitoring activities taking place at many Australian universities, it is clear that these are not widely publicised, and certainly not codified in policy documents. However, it has been shown that there is a clear link between compliance rates and clearly stated consequences for non-compliance (Larivière & Sugimoto, 2018). To give one example, the National Institutes of Health introduced an open access policy in 2005, requesting that funded authors deposit a copy of their publication in PubMed Central (Suber, 2008). In November 2005 it was reported that fewer than 3% of publications were being deposited, and it was recommended that the policy become a requirement (NIH Public Access Working Group, 2005). A new policy was written, mandating deposit, applicable from April 2008 (National Institutes of Health, 2008). By the end of that year compliance was almost 50% (Poynder, 2009). In 2013, the NIH strengthened the policy further, in that it would "delay processing of non-competing continuation grant awards if publications arising from that award are not in compliance with the NIH public access policy" (National Institutes of Health, 2012). This resulted in a 'surge' of deposits of papers, both current and retrospective (Van Noorden, 2013). Similarly, the Wellcome Trust has also strengthened their open access policy over time. Their policy was introduced in 2005 but compliance was only at 15% in 2007 (Wellcome Trust, 2014). In June 2012, when compliance was at 55%, the policy changed, requiring grant recipients to demonstrate compliance otherwise "the final payment on the grant will be withheld" and new grants would not be awarded (Wellcome Trust, 2012). Following the stronger requirement, by 2019 compliance was up to 95% (Wellcome Trust, 2019). This evidence should provide strong incentive to policy makers at all levels, including Australian universities, to ensure that OA policies include meaningful consequences for compliance failures.
Standardisation, consistency, and aligned intent

Our analysis clearly demonstrates enormous variation across OA policies. We found significant differences in the intent of policies; the definitions of OA; the arguments used to support it; requirements for the timing of OA deposits; positions on paying for publication; the language used to describe researcher responsibilities; the exceptions to OA requirements; and the role of libraries in both policy development and compliance and monitoring. While we recognise that institutions are independent entities with their own goals and priorities, and therefore some variation is perhaps to be expected, the overall picture is one of confusion and inconsistency. This is especially troubling given that Australian researchers do not work in institutional vacuums. Cross-institutional collaboration is common in Australia (Luo et al., 2018), as is researcher mobility, with most academics working at multiple universities over the course of their careers (Bexley et al., 2011).

Our analysis demonstrating the wide variety of positions on, and language about, OA in Australian institutional policies is likely to suggest to researchers that OA is a fractured concept. This may be a result of OA policies being developed and owned by a range of roles and institutional areas which may have different priorities, and may account for some of the lack of standardisation in Australian university OA policies. While evidence has repeatedly shown that clear, consistent national or supranational positions on OA are the most effective way of maximising OA performance, our study suggests that Australian universities are a long way from achieving this. In the absence of clear national leadership it is vital that HE institutions themselves work collaboratively to build consensus around effective OA policy development. Further research in this area, in order to better understand the key stakeholders and organisational processes at play, and identify the appropriate mechanisms for collaboration, would undoubtedly be beneficial.

The stated intentions of the policies analysed in this study differ markedly, ranging from increasing the impact of the institution's research outputs, to increasing the profile of the institution, to improving society. None of these rationales are problematic in themselves; however, the disparity across the policies indicates a lack of shared purpose. The policies serve different purposes within institutions. It is interesting that of all of the policies, only two mention the position that 'publicly funded research should be publicly available', which has been a longstanding justification for open access. For Australia to have a position on open access, reflecting activity in Europe, it will be necessary to come to an agreed position on why open access is needed.

Clarity on APCs

Our analysis of the varied positions on paying for publication, whether as gold or hybrid, indicates that Australia is very far from putting forward a unified position on APCs. As well as the general and overarching point that this lack of consistency is confusing for researchers, this is particularly significant given the recent shift to what are becoming known as 'transformative' deals between academic libraries and publishers. These incorporate the costs of publication as well as the costs of subscriptions, with a general aim of reducing overall costs or at least remaining cost neutral (Hinchcliff, 2019). In order to assess the value of a proposed deal to an institution, there is a pressing need to understand the institutional expenditure on APCs. In Australia, where APCs are mostly paid by individual grant holders and not centrally managed, it is difficult to determine the level of expenditure on them. Attempts to identify this figure date as far back as 2014 (Kingsley, 2014).
In Australia, group negotiations on content procurement are managed by the Council of Australian University Librarians (CAUL) through a CAUL consortium. In October 2019, CAUL announced the first transformative agreement for Australia and New Zealand with the UK-based Microbiology Society, which provides the university libraries with the ability to pay a single "publish and read" fee for uncapped open access publishing in all of the Society's journals by corresponding authors (CAUL, 2019). During 2019 CAUL also commenced a project to design and implement a consistent process for the collection and reporting of APCs (Cramond et al., 2019). As transformative deals become more common in the Australian landscape, the need to have clearer policies relating to APCs and a better understanding of APC expenditure at an institutional and national level becomes more urgent. This in turn requires institutions to develop consistent policies on funding APCs, and clearer guidance for researchers on their use.

Timing of deposits

Only 13 of the 20 institutional OA policies were found to specify a deadline for deposit of papers into a repository. In many of those 13 we found inconsistency, and a conflation between instructions to 'deposit work' and 'make the work openly accessible' in the language of the policies. 'Deposit' and 'make open' are different actions, and clear differentiation of the two in any OA policy would assist researchers. For example, if a policy states that a work must be deposited on publication, yet there is a publisher embargo on OA, then the work must initially not be OA and only be available as a metadata-only record in the repository. The full work may only be made openly accessible once the embargo period is complete. It is essential that those responsible for drafting policy understand the challenges for researchers associated with understanding this complex space, and draft policy accordingly. As there is a positive correlation between requiring deposit closer to acceptance (rather than on or after publication) and compliance (Larivière & Sugimoto, 2018), policies should ideally both stipulate when a work needs to be deposited into a repository, and when the work needs to be made OA. Thus there are significant operational implications of not clarifying this differentiation, and not noting when a deposited item is subject to a publisher embargo.

Conclusion

In this article we have reported in detail the results of a content analysis of formal institutional open access policies at Australian universities. Just 20 of 40 Australian universities were found to have such a policy, despite the Australian Code for the Responsible Conduct of Research requiring universities to publish "policies and mechanisms that guide and foster the responsible publication and dissemination of research" (Australian Government, 2018). Within the 20 analysed policies we found extensive variation across a number of crucial areas, including paying for publication, deposit timing, and the intent and rationale underpinning the policies. In addition, we found only three policies which explicitly stated the consequences for non-compliance.
There is growing impetus towards the development of a national OA strategy in Australia. The new Australian Chief Scientist, Dr Cathy Foley, has indicated her support for a unified approach, a move welcomed by advocacy groups (CAUL & AOASG, 2021). Our findings show just how vital consensus building and standardisation will be as part of this process. We suggest that there is an urgent need for such consensus building and standardisation across the sector.

The European Commission published their Guidelines on Open Access to Scientific Publications and Research Data in Horizon 2020 in 2016 (European Commission, 2016), and the associated Horizon 2020 programme represented a EUR 30 billion investment in research and innovation between 2018-2020, under which "each beneficiary must ensure open access to all peer-reviewed scientific publications relating to its results" (European Commission, n.d.a). More recently, of course, Plan S has been launched (cOAlition S, 2021), a far-reaching and somewhat controversial initiative designed to ensure that the outputs of publicly funded research be made available in Open Access form. Individual European countries are also increasingly developing national approaches to open access and, more broadly, open research. Recent years have seen many countries propose national policies and strategies related to OA, for example Sweden (Swedish Research Council, 2015), Ireland (National Open Research Forum, 2019), and Finland (National Steering Group on Open Science and Research, 2020). In the UK the Research Councils UK open access policy (UK Research and Innovation, 2021), which took effect from 1 April 2013, and the Higher Education Funding Council for England (HEFCE) policy relating to articles submitted to the Research Excellence Framework (HEFCE, 2019), combined with the existing Wellcome Trust open access policy (Wellcome Trust, n.d.), have resulted in a significant shift to open access.

Figure 1: Comparing the level of gold and repository-mediated open access of individual universities (https://storage.googleapis.com/oaspa_talk_files/country_scatter.html)

While in Australia the responsibility for managing open access policies is placed with institutions, to the best of the authors' knowledge to date there has not been a detailed analysis of open access policies in Australian universities. This study aims to provide a content analysis of the open access policy landscape in Australia, considering key aspects of open access policies, including the means by which open access is achieved, the timing of the deposit of work into a repository and whether this differs from the timing of when it is required to be made openly available, the provision or otherwise of funds to support open access publishing (and whether there are restrictions around these funds), and other aspects of the policies. The stated intention and purpose of the policies is also a question of interest.
highlight the role of the library in the development of an OA policy at Allegheny College, noting that an holistic effort was required encompassing advocacy and systems development, and emphasising the importance of institutional buy-in. Orzech & Meyers (2020) also outline the background to enacting an OA policy at several campuses of the State University of New York. They also highlight several key characteristics of institutional OA policies, including either mandatory or voluntary deposit, and opt-in or opt-out approaches. Otto & Mullen (2019) focus on the impact of the implementation of an OA policy at Rutgers, describing a significant increase in self-depositing practices while noting some confusion among researchers about which version of a manuscript to deposit. They also address systems, processes and strategies developed to encourage and facilitate self-archiving. Hoops & McLaughlin (2020) likewise address the systems side of OA policy implementation, describing the development process and functionality of an in-house application designed to support researcher depositing, while Kipphut-Smith (2014) presents the workflows developed to support the OA policy at Rice University. Saarti et al. (2020) report a survey of researchers undertaken to gauge attitudes and perceptions of open scholarly practices in the context of developing and implementing open science policies at the University of Eastern Finland. They note that a wider culture of open scholarship is only at an early stage, with associated challenges for instigating change through policy.

Different sources use different names for comparable categories. For example, for what we categorise as output types included in the policy, ROARMAP refers to it as content types specified, Bosman et al. (2021) refer to it as What is made open access, and Awre et al.
(2016) as A description of the type of research output which the policy covers.

Following this process each document was examined for information in the following categories:

• Date of first version of the policy
• Date of most recent version of the policy
• Date of next scheduled review of the policy
• Responsible office/Policy owner
• Definition of OA
• Whether the policy defines OA
• Whether the policy defines the content to which the policy applies
• Output types included in the policy
• Timing for depositing outputs
• People to whom the policy applies
• Role of the library
• The language used to describe responsibilities
• Exceptions to the policy
• Consequence for non-compliance
• If and how FAIR, ARC and NHMRC are mentioned in the policy
• If and how research data is mentioned in the policy
• Type of OA (green, gold) covered by the policy
• The type of information on copyright included in the policy
• If and how funding for gold OA and/or paying APCs is covered in the policy

Definitions of OA might be expected to draw on widely used sources such as the Budapest Open Access Initiative (BOAI) (2002), the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities (Berlin) (2003), or UNESCO (n.d.). Given the Australian context, we might also expect to see text based on the AOASG (2019, itself based on the Budapest and Berlin declarations) or even the ARC (2017) and NHMRC (2018) definitions. Searches in both Google and Google Scholar revealed that most definitions covered aspects from the above definitions, but only two referenced the sources of their definitions, one of these referencing AOASG (the Australian Catholic University) and one BOAI (University of New South Wales). The ARC/NHMRC definition covers reuse, licensing and attribution, which are key concepts in understanding open access; however, most definitions used simplified language and some focused only on access, for example:

"Open Access means immediate, permanent, unrestricted, free online access to the full-text of refereed research publications." (University of New England and James Cook University)

"Open access: Allowing research outputs to be freely accessible to the general public;"

One made the definition local to their own organisation: "Open access means free and unrestricted (electronic) access to [Institution] conducted research, articles and other scholarly outputs." (La Trobe University)

Another made the definition relate only to green open access in a repository and did not reflect other OA options: "Open Access means permanent, free online access to research and scholarly publications through a central repository on the public internet" (Southern Cross University)

These simplified definitions did not refer, for example, to reuse, licensing and attribution, which are key concepts in understanding open access, misunderstandings of which may affect the likelihood of researchers making their work open access (Zhu

Figure 2: Analysis of the nine Australian university OA policies which mention paying for publication

Others are more focused on the exchange of information with the public. For example, the University of Sydney states the policy "supports the University's core values of engaged inquiry and mutual accountability by encouraging transparency, accountability, openness and the sharing of scholarly works and other research outputs with the research community and the public". Both Macquarie University and the Australian National University refer to the open exchange of information as a "bedrock academic value". Despite the general philosophy that publicly funded research should be publicly available, only Edith Cowan University and the University of New England
specifically refer to "publicly funded research". There are some significant differences between our findings and those that emerged from Fruin & Sutton's (2016) survey of US librarians. They found that author rights retention and ensuring public access to publicly funded research were among the most commonly used arguments to support the principles of OA. Neither of these arguments featured in our analysis of Australian OA policies. It is worth noting, however, that Fruin & Sutton's question related to the arguments used "in conversations with faculty and other constituents about why open access is important", rather than the arguments used in the policies themselves. The different arguments used in these different contexts are potentially revealing, and perhaps merit further research.

Table 1: The status of Open Access policy implementation in Australian universities

Table 2: To whom does the policy apply

As might be expected, each OA policy was found to name a person or persons with responsibility for the policy. There was a surprising lack of consistency in the terminology used to describe these individuals and groups, reflective perhaps of different governance and organisational structures and associated nomenclature across Australian universities. Common titles were Accountable Officer; Administrator; Approval authority; Contact officer; Governing authority; Policy Custodian; Policy owner; Policy

Table 3: The role of libraries in OA policies

Table 4: Mentions of funders in OA policies

As noted above, the major Australian national research funders (ARC and NHMRC) represent only 14.6% of all Higher Education Research & Development funds (Australian Bureau of Statistics, 2020).

Table 5: Swan et al.'s (2015) standard requirements in OA policies

It is instructive to consider these findings in light of the findings of previous studies. Fruin & Sutton's study of US OA policies (2016) found that just 10% of institutional policies specified not observing publisher embargoes, with most policies incorporating waivers for authors, both results broadly consistent with our findings. Similar results were also found in Hunt & Picarra's study (2016), with 21% of institutional policies not supporting any waivers to OA deposit. It seems clear that there is a general trend for institutional OA policies to explicitly respect publishers' positions on OA, thus not following Swan et al.'s (2015) recommendation that such policies should not allow waivers.
Table 6: Language and action verbs used in association with OA directives — requires: 3 (15%), e.g. "The University requires all staff and students to deposit Research Outputs ..."; encourages: 1 (5%), e.g. "The University encourages staff and students to ...".

Larivière & Sugimoto (2018) address compliance and monitoring in the context of funder policies, and have argued that the best policies are those which are effectively enforced. It is likely that compliance monitoring activities (of varying types) are undertaken at many institutions, without being specified in policies. Nonetheless we argue that specifying compliance activities, and consequences for breaches of policy, would strengthen OA policies.

Table 7: Rationale used in policies in support of OA as a principle
Ataxin-2-Like Is a Regulator of Stress Granules and Processing Bodies Paralogs for several proteins implicated in neurodegenerative disorders have been identified and explored to further facilitate the identification of molecular mechanisms contributing to disease pathogenesis. For the disease-causing protein in spinocerebellar ataxia type 2, ataxin-2, a paralog of unknown function, termed ataxin-2-like, has been described. We discovered that ataxin-2-like associates with known interaction partners of ataxin-2, the RNA helicase DDX6 and the poly(A)-binding protein, and with ataxin-2 itself. Furthermore, we found that ataxin-2-like is a component of stress granules. Interestingly, sole ataxin-2-like overexpression led to the induction of stress granules, while a reduction of stress granules was detected in case of a low ataxin-2-like level. Finally, we observed that overexpression of ataxin-2-like as well as its reduction has an impact on the presence of microscopically visible processing bodies. Thus, our results imply a functional overlap between ataxin-2-like and ataxin-2, and further indicate a role for ataxin-2-like in the regulation of stress granules and processing bodies. Introduction Late-onset neurodegenerative disorders have been intensively studied over the last two decades. However, the molecular mechanisms responsible for their pathologies remain to be elucidated. Of note, some knowledge was gained by exploring the physiological function of paralogous proteins identified for several disease proteins. Regarding the family of polyglutamine disorders, which includes Huntington's disease, spinobulbar muscular atrophy, dentatorubral pallidoluysian atrophy, and spinocerebellar ataxia (SCA) type 1, 2, 3, 6, 7 & 17 [1,2,3,4], a gene duplication of ataxin-1-like (ATXN1L)/Brother of ataxin-1 (Boat), the respective paralog of the disease-causing protein ataxin-1 (ATXN1), ameliorated the observed neurotoxicity in a SCA1 mouse model, indicating overlapping functionality between paralog and disease protein [5]. The search for the gene causing SCA2 led to the isolation of the SCA2 gene [6,7,8], which comprises an intrinsic CAG repeat that is interrupted by 1-3 CAA triplets in healthy individuals, while a continuous CAG repeat of more than 34 repeats has been observed in affected individuals [6,8,9]. The expansion on the genetic level is causal for an extended polyglutamine domain in the SCA2 gene product ataxin-2 (ATXN2). Interestingly, these efforts also resulted in the isolation of a partial cDNA sequence on chromosome 16 that showed high homology to the SCA2 gene sequence [7]. The encoded protein showed high homology with ATXN2 and was therefore named ataxin-2-related protein (A2RP) [10]. Independently from these studies, Meunier and colleagues reported the identification of a gene at the same chromosome locus and named the respective gene product ataxin-2 domain protein (A2D) [11]. Proteins of the A2RP or A2D family, which we refer to as ataxin-2-like (ATXN2L), are widely expressed in human tissues and orthologs are present in other species [10]. Comparison of the derived amino acid sequences of ATXN2 and ATXN2L showed that several motifs are conserved such as the N-terminal acidic domain containing the mRNA-binding motifs Sm1 and Sm2, putative caspase-3 cleavage sites, a clathrin-mediated trans-Golgi signal, and an endoplasmic reticulum exit signal. Furthermore, both proteins comprise the binding motif for the PABC domain of the poly(A)-binding protein (PABP), termed PAM2 [10,12]. 
Despite these shared motifs, the polyglutamine tract is not conserved between ATXN2 and ATXN2L [10]. Considering the high degree of structural similarity between ATXN2 and ATXN2L, a functional overlap between these paralogs is likely. Regarding the cellular function of ATXN2L, which remains to be understood, an association with the erythropoietin receptor has been reported suggesting a function in cytokine signaling [11]. To this point, a role of ATXN2 in endocytic processes and RNA-processing pathways was demonstrated [13,14,15,16,17]. Concerning its function in the cellular RNA metabolism, ATXN2 is found in association with PABP, further being a dosage-dependent regulator of this protein [14,15]. Moreover, direct interactions of ATXN2 with RNA splicing factors and RNA-binding proteins have been described [18,19]. Finally, an association of ATXN2 with polyribosomes and direct binding of ATXN2 to RNA was demonstrated [20], and ATXN2 has been identified as a component of stress granules (SGs) [14,15]. These are dynamic cellular structures assembling in mammalian cells in response to diverse cellular stresses representing sites of mRNA regulation. SGs contain untranslated mRNAs, eukaryotic initiation factors, small ribosomal subunits, various RNA-binding proteins, and proteins implicated in cell signaling [21,22,23]. Furthermore, there is a dynamic interplay between SGs and processing bodies (Pbodies), sites of mRNA degradation that comprise components of microRNA or RNAi pathways as well as the nonsensemediated mRNA decay pathway [24]. In this study, we considered a potential functional overlap between ATXN2L and ATXN2 with regard to RNA metabolism. We discovered that ATXN2L associates with known ATXN2 interaction partners such as the RNA helicase DDX6 and PABP, and with ATXN2 itself. Furthermore, we observed that ATXN2L is a bona fide component of SGs. Finally, we report that ATXN2L overexpression as well as its reduction has a regulatory effect on SGs and P-bodies. ATXN2L Associates with PABP, DDX6, and ATXN2 Since ATXN2L and ATXN2 share most functional motifs and domains as outlined in Fig. 1A [10], both proteins likely act in related pathways. Our earlier work demonstrated that ATXN2 interacts with PABP and with the RNA helicase DDX6/Rck [14,15]. In this regard, the PAM2 motif, a binding motif found in proteins interacting with the PABC domain of PABP [12], is also present in ATXN2L. Moreover, the acidic domain, which comprises the LSm domain and the LSm-associated domain (LSmAD), represents the interaction surface between ATXN2 and DDX6, and is 69% identical (80% sequence similarity) in both proteins [10,15,25]. Accordingly, we set out to analyze whether ATXN2L is also found in association with PABP and DDX6. For this, we performed co-immunoprecipitation experiments using cell lysates prepared from HEK293T and HeLa cells. After incubation of cell lysates with an antibody directed against ATXN2L, precipitated proteins were analyzed by immunoblotting. As shown in Fig. 1B, we were able to precipitate endogenous PABP (left panel) and DDX6 (middle panel) with an ATXN2L-specific antibody from both cell lysates. Due to this finding, we additionally investigated whether ATXN2L and ATXN2 are found in association as well. Again, cell lysates were prepared from these two cell lines and processed as described. We observed that endogenous ATXN2 was precipitated with the ATXN2L-specific antibody as well (Fig. 1B, right panel). 
In addition, we included the neuroblastoma cell line SH-SY5Y used previously [15] and observed same results (Fig. S1A). Thus, ATXN2L is found in a complex comprising PABP, DDX6, and ATXN2 in mammalian cells indicating that ATXN2L is involved in cellular RNA processing pathways as well. Next we investigated the intracellular localization of endogenous ATXN2L by confocal microscopy as described in Materials and Methods. This analysis revealed that ATXN2L is primarily cytoplasmic but also present in nuclear structures that co-localize with SR proteins (Fig. 1C, upper panel). These represent markers of nuclear splicing speckles [26], which belong to a family of splicing regulators with a characteristic domain rich in arginine and serine residues [27]. Thus, ATXN2L is a component of splicing speckles. Since a nuclear localization was also reported for ATXN2 [28], we further analyzed whether endogenous ATXN2 is part of these structures as well. As shown in Fig. 1C (lower panel), nuclear ATXN2 did not co-localize with SR-positive structures in HeLa cells under the chosen conditions. Thus, our findings indicate that ATXN2L is associated with the nuclear splicing machinery. ATXN2L is a Component of SGs Since ATXN2, DDX6, and PABP are known components of SGs [14,15,29,30], we investigated in the next step whether endogenous ATXN2L is part of these cellular structures as well. To induce the formation of SGs, HeLa cells were first treated with sodium arsenite or heat-shocked. Then, cells were fixed and stained with ATXN2-and ATXN2L-specific antibodies, and the localization of both proteins was analyzed by confocal microscopy. As shown in Fig. 2A, ATXN2L localizes in distinct cytoplasmic foci in arsenite-and heat-treated cells, which were also positive for ATXN2. We also included the core SG marker proteins T-cellrestricted intracellular antigen-1-related protein (TIAR) and PABP in this analysis, and detected a co-localization of ATXN2Lpositive foci with TIAR-and PABP-positive foci in arsenite-and heat-treated cells (Fig. S1B, S1C). Since the composition of SGs can vary under different stress conditions [22,23,31], we investigated the localization of ATXN2L under additional stresses, such as osmotic stress induced by sorbitol [32], ER stress induced by dithiothreitol (DTT) [33], and oxidative stress induced by sodium selenite or hydrogen peroxide (H 2 O 2 ) [34,35]. Also under these conditions ATXN2L formed cytoplasmic foci that co-localize with ATXN2-positive foci (Fig. 2B). Moreover, we performed stress experiments in the presence of cycloheximide, a setting preventing SG assembly [21]. For this, HeLa cells were treated with arsenite in the presence of cycloheximide, fixed, and analyzed by confocal microscopy. We observed that ATXN2L and ATXN2 retained their cytoplasmic distribution in HeLa cells with concurrent cycloheximide treatment, whereas in arsenite-treated control cells both proteins are localized to SGs (Fig. 3A). We also performed this experiment with heat-treated cells and made the same observations (Fig. S2A). Finally, we analyzed the localization of ATXN2L under conditions in which SGs disassemble [21]. Subsequent to arsenite treatment, HeLa cells were incubated for additional 90-180 min under normal growth conditions, fixed, and analyzed by confocal microscopy. As shown in Fig. 3B, ATXN2L-positive as well as ATXN2-positive foci dispersed in arsenite-treated cells after 180 min recovery and both proteins were diffusely distributed throughout the cytoplasm. 
Again, similar findings were obtained under heat stress (Fig. S2B). In conclusion, our results demonstrate that ATXN2L is a bona fide SG component. Altered Intracellular ATXN2L Concentration Influences SG Formation A high-throughput approach revealed an association between ATXN2L and the RasGAP-associated endoribonuclease G3BP [36], which if overexpressed induces the formation of SGs per se [37]. Accordingly, we first confirmed the described association between ATXN2L and G3BP. For this, co-immunoprecipitation experiments were carried out with HEK293T and HeLa cell lysates. As shown in Fig. 4A, we were able to precipitate endogenous G3BP with an ATXN2L-specific antibody (left panel), and endogenous ATXN2L with a G3BP-specific antibody (right panel). Moreover, we investigated whether ATXN2 is also associated with G3BP and carried out further co-immunoprecipitation experiments. We observed that endogenous G3BP was precipitated with an antibody directed against ATXN2 and vice versa (Fig. 4B), demonstrating complex formation between both ataxin proteins and G3BP. Next we addressed the question whether ATXN2L overexpression may possibly induce SGs as the SG marker protein G3BP does [37]. We transfected HeLa cells with the respective plasmids for overexpressing ATXN2L, ATXN2, or G3BP, and analyzed their impact on SG induction using the SG marker protein eukaryotic translation initiation factor 4 gamma (eIF4G) in our confocal microscopy analysis. Of note, we observed that cells with ATXN2L overexpression exhibited eIF4G-positive foci, while eIF4G remained evenly distributed in non-transfected cells (Fig. 4C, upper panel). SG formation was not detected in ATXN2 overexpressing cells using eIF4G staining (Fig. 4C, middle panel). As expected, we observed that overexpressed G3BP co-localized with eIF4G-positive foci (Fig. 4C, lower panel). Thus, overexpression of ATXN2L induces SG formation. In this regard, we showed earlier that reduction of the intracellular ATXN2 level has an impact on SG formation [15]. Consequently, we analyzed the influence of a reduced ATXN2L level on SG formation as well. First we verified the ATXN2L or ATXN2 knock down by immunoblotting and microscopy ( Fig. S3). Then, we transfected HeLa cells with specific siRNA molecules targeting ATXN2L or ATXN2 transcripts, with unspecific non-targeting (NT) siRNA molecules, or left cells untreated (mock), and exposed cells to arsenite stress 72 h after transfection. After fixation the SG marker proteins eIF4G and TIAR were stained and analyzed by confocal microscopy. As shown in Fig. 5A, cells with reduced ATXN2L or ATXN2 level exhibited fewer and smaller eIF4G-positive foci compared to control cells (mock and siNT). To further corroborate and quantify these findings, we additionally performed an automated microscopy approach based on a Cellomics ArrayScan VTI high-content screening platform. This system automatically acquires images of stained cells in multi-well plates. Cells are identified by nuclear staining and fixed object selection parameters, and SGs are quantified within a circular area extending the nuclear region ( Fig. S4; for details please see Materials and Methods). First, we excluded that transfection of siRNA molecules has an impact on cell survival or the nuclear size representing a basic morphological parameter (Fig. 5B). 
Of note, we observed that in cells with a lowered ATXN2L level the number of eIF4G- and TIAR-positive SGs was significantly reduced to 34±4% or 46±5% compared to the non-targeting control (p<0.001, n = 5 replicate wells/condition, ±SD) (Fig. 5C and S4). As expected, cells with a reduced ATXN2 level also exhibited fewer eIF4G- and TIAR-positive SGs (Fig. 5C and S4). In addition, the size of SGs was considerably smaller in cells with reduced ATXN2L or ATXN2 level compared to controls (Fig. 5D). Similar results were obtained if G3BP was used as SG marker protein (data not shown). Thus, our findings demonstrate that ATXN2L is important for SG formation.

ATXN2L Level Influences P-body Formation

We reported earlier that overexpression of ATXN2 influences the presence of microscopically visible P-bodies in HEK293T cells, whereas no obvious effect was observed in case ATXN2 levels were reduced [15]. Consequently, we wanted to analyze the effect of altered ATXN2L levels on P-body formation as well. First, HeLa cells were transfected with the expression constructs RSV-ATXN2L-MYC or pCMV-MYC-ATXN2-Q22 to overexpress ATXN2L or ATXN2, respectively. Cells were then incubated for 48 h to allow expression of proteins, fixed and treated with an antibody directed against the MYC-tag to visualize cells overexpressing ATXN2L or ATXN2. For visualization of P-bodies an antibody directed against the component DDX6 was used. As shown in Fig. 6, cells overexpressing ATXN2L exhibited a diffuse cytoplasmic localization of DDX6, whereas in non-transfected cells DDX6 localized to P-bodies. As reported earlier [15], a comparable reduction in P-body number was observed in cells overexpressing ATXN2 (Fig. 6). Thus, overexpression of ATXN2L affects the presence of microscopically visible P-bodies. We then set out to investigate the effect of a lowered intracellular ATXN2L level on P-body formation. For this, HeLa cells were transfected with specific siRNA molecules for ATXN2L or ATXN2 or with unspecific non-targeting siRNA molecules, and P-body formation was analyzed by confocal and automated high-content screening microscopy. Confocal microscopy revealed that the number of DCP1-positive P-bodies was strongly reduced in cells with a low ATXN2L level compared to control cells treated with non-targeting molecules (Fig. 7A). Moreover, no obvious effect on P-body number was detected in cells with reduced ATXN2 level, which is consistent with our earlier study using HEK293T cells [15]. For the quantification of P-bodies using our automated microscopy approach we again first determined that a similar number of cells was analyzed, and that no effect occurred on the nuclear size (Fig. 7B). Interestingly, this approach revealed that in cells with reduced ATXN2L level the number of DCP1- and DDX6-positive P-bodies was decreased to 16±1% and 38±4% compared to the non-targeting control, and the size of P-bodies was reduced as well (Fig. 7C, D and S5). We also detected a minor increase in the number and size of microscopically visible P-bodies in cells with low ATXN2 level. In sum, ATXN2L is a regulator of P-body formation.

Discussion

In this study, we revealed a functional overlap between ATXN2L and ATXN2 with regard to RNA metabolism, since ATXN2L associates with known interaction partners of ATXN2, the RNA helicase DDX6 and PABP [14,15], and with ATXN2 itself. Moreover, we discovered that ATXN2L is a bona fide component of SGs in mammalian cells under different stress conditions.
Most importantly, we identified ATXN2L as a regulator of SGs, since ATXN2L overexpression caused induction of SGs, whereas a low ATXN2L level reduced the number and size of SGs. The overexpression or reduction of several proteins has been reported to affect SG formation or SG composition [38], possibly at different steps, since the process of SG formation has been categorized in different stages [22]. The first stage begins with stalled initiation and ribosome runoff marking ribonucleoprotein complexes as sites where SGs assemble. This stage is followed by the primary aggregation or SG nucleation step, initiated by RNA-binding proteins with aggregation-prone properties, such as T-cell-restricted intracellular antigen-1 (TIA-1) and G3BP [22,37,39], followed by the secondary aggregation step. Finally, proteins are recruited through protein-protein interactions [22]. Of note, we observed an association between G3BP and ATXN2L in HeLa cells that had originally been described in a high-throughput analysis [36], suggesting that ATXN2L could be important for the nucleation step, as G3BP is. Interestingly, we also detected an association between ATXN2 and G3BP in this study. Even so, we observed that ATXN2 overexpression has no effect on SG induction in HeLa cells under the chosen experimental settings, whereas a reduced ATXN2 level also affects the number and size of SGs, consistent with our earlier study using HEK293T cells [15]. Evidence has been provided that posttranslational modifications of proteins are important in the complex dynamic process of SG assembly. Many RNA-binding proteins implicated in the primary nucleation step contain methylatable domains [40,41]. For the fragile X mental retardation protein (FMRP), the methylation of its RGG-rich domain is important for its function as an inducer of SGs [42]. Remarkably, an arginine methylation site at position 361 in the ATXN2L protein has been identified to be methylated, whereas this amino acid position is not conserved in the ATXN2 protein [43]. Interestingly, the overexpression of a phosphomimetic G3BP mutant failed to assemble SGs, while overexpression of a non-phosphorylatable G3BP mutant caused SG formation [37]. Therefore, it can be speculated that differences in posttranslational modifications between ATXN2L and ATXN2 are accountable for the observed results regarding SG formation, a task that will be addressed in the future.

Figure 1. ATXN2L is found in association with PABP, DDX6 and ATXN2. A) Scheme of the domain architecture of ATXN2L and ATXN2 highlighting conserved functional motifs. Lines below indicate antibody epitopes for anti-ATXN2L and anti-ATXN2 (BD Biosciences). B) Cell lysates were prepared from HEK293T and HeLa cells as described in Materials and Methods. Co-immunoprecipitation experiments were carried out with an anti-ATXN2L antibody, and precipitated proteins were detected using specific antibodies against PABP, DDX6 or ATXN2 (BD Biosciences). C) HeLa cells were fixed and ATXN2L or ATXN2 was stained with an anti-ATXN2L antibody or an anti-ATXN2 antibody (Sigma, red). SR-splicing proteins were stained using an anti-SR antibody (green). Nuclei were visualized by Hoechst staining (blue). Scale bars represent 20 µm. doi:10.1371/journal.pone.0050134.g001
On the other hand, we discovered that ATXN2L is a regulator of P-bodies as well, since ATXN2L overexpression and reduction decreases the number and size of P-bodies in HeLa cells, whereas only ATXN2 overexpression had a noticeable effect on P-bodies as reported earlier [15]. Analogous to the assembly of SGs, P-body formation is also affected by the overexpression and reduction of various P-body components [24]. Interestingly, depletion of DDX6 results in the loss of microscopically visible P-bodies [44]. We speculated in our earlier study that the observed mislocalization of DDX6 and loss of microscopically visible P-bodies in cells overexpressing ATXN2 might be based on abnormal protein interactions attributable to the increase in the interaction surface, the LSm/LSmAD domain of ATXN2 [15]. Since this domain is conserved in the ATXN2L protein [10], such a recruitment mechanism is likely to explain the observed effect as well. Regarding the effect on P-body number and size observed in cells with a reduced ATXN2L concentration, it might be of interest that ATXN2L comprises a sequence in the C-terminal region that shows homology to the Pat1 protein family, which is absent in the ATXN2 protein. The Pat1 protein family is conserved in eukaryotes and two Pat1 proteins, PatL2/Pat1a and PatL1/Pat1b, exist in humans [45,46]. PatL1/Pat1b is a Pbody component and the expression of PatL1/Pat1b protein lacking certain domains or its depletion results in loss of microscopically visible P-bodies, whereas PatL1/Pat1b overexpression induces P-body formation [46,47]. Moreover, PatL1/ Pat1b shuttles between the cytoplasm and nucleus, and nuclear PatL1/Pat1b localizes to splicing speckles, indicating that it is implicated in RNA-related processes in both cellular compartments [48]. In this light, it is interesting that we observed a colocalization of ATXN2L and nuclear splicing speckles, suggesting that ATXN2L may function in splicing processes as well. On the other hand, localization of splicing factors is regulated by protein arginine methylation, which also modulates associations between SR-like proteins and splicing factors [49,50]. Nonetheless, lysine acetylation is another posttranslational modification that controls the localization and association of proteins involved in various cellular processes including amongst others gene transcription, translation or splicing [51]. A high-resolution mass spectrometry approach identified ATXN2L as target for lysine acetylation, while ATXN2 was not detected [52]. Again, such a difference could be responsible for the observed effects on P-body formation as well. Further studies are necessary to dissect and define the function of ATXN2L in the cellular mRNA metabolism. Moreover, it will be interesting to further explore the functional interplay between ATXN2L and ATNX2. As mentioned before, functional studies of ATXN1L/Boat, the paralog of the disease-causing protein ATXN1, revealed a suppressive activity in SCA1 disease pathogenesis [5]. Utilizing a SCA1 Drosophila model, Mizutani and colleagues showed that an eye defect, caused by mutant ATXN1, was suppressed by ATXN1L overexpression due to the association of both proteins [53]. Moreover, evidence was provided that the activity of mutant ATXN1 causative for the observed neurotoxic events in a SCA1 knock-in mouse model can be ameliorated by duplication of ATXN1L modifying incorporation of mutant ATXN1 into native protein complexes [5]. 
Consequently, it will be interesting to explore whether gene dosage compensation or antagonistic behavior between ATXN2L and ATXN2 might occur in SCA2 transgenic animal models and whether and how this impacts SCA2 pathogenesis.

Afterwards, the amplified DNA fragment was purified, treated with SalI and NotI, subcloned into the SalI/NotI sites of the expression vector pCMV-MYC (Clontech), and validated by sequencing.

Cell Cultivation and Transfection

HEK293T, HeLa and SH-SY5Y cells were cultivated in DMEM (Dulbecco's modified Eagle medium, Invitrogen) supplemented with 100 units/ml Penicillin/G-Streptomycin (Biochrom) and 10% fetal bovine serum (FBS, Biochrom) at 37 °C and 5% CO2. Transfections were carried out in 24-well plates using 1–2 µg RSV-ATXN2L-MYC, pCMV-MYC-ATXN2-Q22, or

Figure 3. ATXN2L behaves as a dynamic SG component like ATXN2. A) HeLa cells were concurrently treated with cycloheximide and 0.5 mM sodium arsenite for 1 hour. As controls, cells were only treated with sodium arsenite, or cells were left untreated at 37 °C. B) Cells were treated with 0.5 mM sodium arsenite for 1 hour and incubated at normal growth conditions for 90 or 180 min to allow recovery. Subsequently, cells were fixed and stained with antibodies directed against ATXN2L (red) and ATXN2 (BD Biosciences, green). Nuclei were stained with Hoechst (blue). Scale bars represent 20 µm. doi:10.1371/journal.pone.0050134.g003

RNA Interference Experiments

HeLa cells were seeded in 24-well plates on glass slides in DMEM supplemented with 10% FBS for 24 hours. Then, 1.2 µl of 20 µM siRNA molecules [ATXN2L Stealth Select RNAi Pool (Invitrogen), ATXN2 On Target Plus Smart Pool, On Target Plus Non-targeting Pool (Dharmacon)] and 3 µl Lipofectamine RNAiMAX transfection reagent (Invitrogen) were mixed, incubated for 10 min and added to the cells. As mock control, untreated cells were included. 72 hours post transfection cells were exposed to oxidative stress by treatment with 0.5 mM sodium arsenite for 1 hour or left untreated. Afterward, cells were fixed with 2% formaldehyde for 10 min and ice-cold methanol for at least 30 min and processed for microscopic analyses as described [15]. For automated microscopy, HeLa cells were seeded in 6-well plates, treated with siRNA concentrations as described above and incubated for 72 hours. Then, cells were treated with trypsin, seeded in 96-well imaging plates (Greiner µClear) for 12-24 hours, and exposed to arsenite stress or left untreated. For validation of knockdown, HeLa cells were seeded in 12-well plates, treated with siRNA concentrations as described above and incubated for 72 hours. Cells were then lysed and proteins were subjected to SDS-PAGE and Western blotting as described above and detected using rabbit anti-ATXN2L (Bethyl, 1:1000), mouse anti-ATXN2 (BD Biosciences, 1:1000), mouse anti-G3BP (Abnova, 1:1000) and mouse anti-GAPDH (Ambion, 1:5000) antibodies. Equal protein loading was confirmed by Coomassie staining.

Confocal Microscopy

For stress experiments, cells were seeded on glass slides and either incubated with 0.5 mM sodium arsenite, 0.5 M sorbitol, 2 mM dithiothreitol (DTT), 2 mM hydrogen peroxide (H2O2), 1 mM sodium selenite (Sigma) or exposed to heat stress at 44 °C for 1 hour, while control cells were left untreated. To analyze the effect of cycloheximide on ATXN2L localization, 10 µg/ml cycloheximide was added to arsenite- or heat-treated cells.
For recovery experiments, medium was removed after the chosen stress conditions, fresh medium was applied, and cells were incubated for an additional 90-180 min. Cells were then fixed and proteins were stained with the respective primary antibodies and secondary antibodies [… (1:500, Invitrogen), anti-mouse Alexa Fluor 488 (1:500, Invitrogen)] in 3% BSA/PBS for 1 hour at room temperature, and nuclei were stained with DAPI (Sigma). Cells were analyzed using a confocal microscope (LSM 700, Zeiss) on an inverted stand (Axiovert 200 M, Zeiss) using a Plan-NEOFLUAR 40×/1.3 oil DIC objective. Images were prepared using Zeiss software ZEN version 5.5.

Plates were scanned using a Thermo Fisher Cellomics ArrayScan VTI. Images of 512×512 pixels were acquired with a 20× objective and analyzed using the Cellomics software package (Colocalization V.4 Bioapplication). Cell nuclei were identified by DAPI staining and according to the object identification parameters size: 100–1200 µm², ratio of perimeter squared to 4π area: 1–2, length-to-width ratio: 1–2, average intensity: 50–1000, total intensity: 3×10^4 – 2×10^7. SGs and P-bodies were identified within a circular region extending the nucleus by maximally 20 µm. The object identification parameters for SGs and P-bodies were: size: 1.5–20 µm², ratio of perimeter squared to 4π area: 1–1.8, length-to-width ratio: 1–1.8, average intensity: 100–1500, total intensity: 5×10^3 – 5×10^4.

Supporting Information

Figure S1 ATXN2L interacts and co-localizes with different SG marker proteins. A) Cell lysates were prepared from SH-SY5Y cells and co-immunoprecipitation experiments were carried out with an anti-ATXN2L antibody. Precipitated proteins were detected using specific antibodies against PABP, DDX6 or ATXN2 (BD Biosciences). B, C) HeLa cells were treated with 0.5 mM sodium arsenite or heat-shocked at 44 °C for 1 hour with control cells left untreated at 37 °C. Afterward, cells were fixed and stained with the corresponding antibodies to visualize ATXN2L (red) and B) TIAR (green) or C) PABP (green), respectively. Hoechst staining (blue) was used for the detection of nuclei. Scale bars correspond to 20 µm. (TIF)

Figure S2 ATXN2L behaves as a dynamic SG component under heat stress. A) HeLa cells were heat-shocked in the presence of cycloheximide at 44 °C for 1 hour. Heat-shocked cells or cells left untreated at 37 °C served as controls. B) HeLa cells were heat-shocked at 44 °C for 1 hour and incubated at normal growth conditions for 90 or 180 min to allow recovery. Subsequently, cells were fixed and stained with antibodies directed against ATXN2L (red) and ATXN2 (BD Biosciences, green). Nuclei were stained with Hoechst (blue). Scale bars correspond to 20 µm. (TIF)

Figure S3 Efficiency and specificity of ATXN2L and ATXN2 knock-down. A) HeLa cells were left untreated (mock) or transfected with non-targeting (siNT), or ATXN2L- or ATXN2-specific siRNAs, lysed 72 hours post transfection and subjected to SDS-PAGE. Protein levels of ATXN2L, ATXN2 (BD Biosciences), G3BP and GAPDH were analyzed using the corresponding antibodies. To show loading of equal amounts of protein, the gel was stained using Coomassie blue. B) HeLa cells were transfected with non-targeting (siNT), or ATXN2L- or ATXN2-specific siRNA molecules, fixed 72 hours post transfection and stained with antibodies directed against ATXN2L (red) and ATXN2 (BD Biosciences, green). (TIF)

Figure S4 Quantification of TIAR- and eIF4G-positive SGs.
HeLa cells were left untreated (mock) or transfected with non-targeting (siNT), or ATXN2L- or ATXN2-specific siRNA molecules. 72 h post transfection cells were treated with 0.5 mM sodium arsenite for 1 hour to induce SG formation, fixed, and stained with TIAR- and eIF4G-specific antibodies. Cell nuclei were stained with DAPI. Representative view fields of the automated image analysis are shown. Green encircled nuclei were selected; red encircled nuclei were rejected by the object identification algorithm. Outer cell borders (blue lines) were calculated by extending the nuclear region. TIAR-positive (yellow) and eIF4G-positive (magenta) SGs were quantified within whole cells. (TIF)

Figure S5 Quantification of DCP1- and DDX6-positive P-bodies. HeLa cells were left untreated (mock) or transfected with non-targeting (siNT), ATXN2L- or ATXN2-specific siRNA molecules. 72 h post transfection cells were fixed and stained with DCP1- and DDX6-specific antibodies. Cell nuclei were stained with DAPI. Representative view fields of the automated image analysis are shown. Green encircled nuclei were selected; red encircled nuclei were rejected by the object identification algorithm. Outer cell borders (blue lines) were calculated by extending the nuclear region. DCP1-positive (yellow) and DDX6-positive (magenta) P-bodies were quantified within whole cells. (TIF)
Machine learning framework for image classification

In this paper, we are interested in feature extraction and classification methods for image classification and recognition applications. We report the performance of models trained with varying classifier algorithms on Caltech 101 image categories. For feature extraction, we evaluate the use of the classical SURF technique against global color feature extraction. The purpose of our work is to identify the best machine learning framework techniques to recognize stop sign images. The trained model will be integrated into a robotic system in future work.

INTRODUCTION

The purpose of this paper is twofold. On the one hand, it is an introduction to the image classification paradigm. On the other hand, it attempts to give a comparison between different feature extraction and classification algorithms. The rest of this paper is structured as follows: Section II provides background information on machine learning. Section III presents a detailed description of the Bag of Features paradigm; it also exposes the SURF detector of image regions of interest (ROI) and highlights the unsupervised K-means algorithm. In Section IV we describe learning and recognition based on Bag of Words (BoW) models. Section V discusses the experiments carried out to evaluate different classifiers on the Caltech 101 image dataset. In the conclusion we synthesize the obtained results and present the current direction of our research.

II. MACHINE LEARNING PARADIGM

Machine learning (ML) is a set of algorithms especially suited to prediction. These ML methods are easier to implement and perform better than classical statistical approaches [1]. Instead of starting with a data model, ML learns the relationship between the response and its predictors by the use of algorithms. During the learning phase, ML algorithms observe inputs and responses in order to find dominant patterns. In this work we are interested in computer vision. We deploy and test a machine-learning-based framework for image category classification. To carry out the tests we use the Caltech 101 dataset. As the main issue in image classification is image feature extraction, we use in our research the Bag of Features (BoF) techniques described in Section III.

III. BAG OF FEATURES PARADIGM FOR IMAGE CLASSIFICATION

The development of the Bag of Features (BoF) model is inspired by that of the Bag of Words (BoW). In document classification (text documents), a BoW is a vector that represents the frequency of vocabulary words in a text document. In computer vision, the BoW model is used to classify images; in that case image features are considered as words, and an image is treated as a document. To define "words" in images we use three stages: feature extraction, feature description (Section III.A), and codebook generation (Section III.B) [5-11].

A. Speeded Up Robust Features (SURF) Technique

For feature detection and extraction, we use the Speeded Up Robust Features method. Salient features and descriptors are extracted from each image. This method is chosen over the Scale-Invariant Feature Transform (SIFT) due to its concise descriptor length. In SURF, a descriptor vector of length 64 is generated using a histogram of gradient orientations in the local neighborhood around each key-point [12]. To analyze an image and extract features, SURF considers the processing of grey-level images only, as they contain enough information [13].
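The paper performs this extraction step with the Matlab R2015a toolchain (noted below); purely as an illustration, the following Python/OpenCV sketch shows the same idea of detecting SURF keypoints on a grey-level image and computing 64-dimensional descriptors. It assumes an opencv-contrib build with the non-free SURF module enabled; the file name and Hessian threshold are placeholder choices, not values taken from the paper.

```python
# Hedged sketch: SURF keypoint detection and 64-D description on a grey-level image.
# Requires opencv-contrib-python compiled with the non-free xfeatures2d module.
import cv2

def extract_surf_descriptors(image_path, hessian_threshold=400):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)  # SURF operates on grey levels only
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    # keypoints: salient blob-like regions; descriptors: one 64-D vector per keypoint
    keypoints, descriptors = surf.detectAndCompute(gray, None)
    return keypoints, descriptors

if __name__ == "__main__":
    kps, desc = extract_surf_descriptors("stop_sign.jpg")  # hypothetical example image
    print(len(kps), "keypoints,", desc.shape[1], "values per descriptor")
```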
In this paper, the SURF implementation is provided by the Matlab R2015a library. B. Descriptors clustering: K-Means After extracting descriptors from the training images, unsupervised learning algorithms, such as K-means, are used in order to group them into N clusters of visual words. The metric used to categorize a descriptor into its cluster centroid is the "Euclidean distance". For this purpose, each image extracted descriptor is assigned to its closest cluster centroid. In order to generate the histogram of counts, the cluster centroid's number of occupants is incremented each time a descriptor is mapped into it. At the end of this process, each image is characterized by a histogram vector of length N. To ensure the invariance of this method with respect to the number of descriptors used, it is essential to normalize each histogram by its L2-norm. To group the descriptors and construct the N visual words we use the K-means clustering. This approach is selected over Expectation Maximization (EM) as many experimental methods have confirmed the computational efficiency of K-means with respect to EM [14]. IV. LEARNING AND RECOGNITION BASED ON BOF MODEL Research in computer vision field have led to many learning approaches to leverage the BoF model for the purpose of image recognition. For multiple label classification problems, the evaluation metric which is used is the confusion matrix. A confusion matrix is defined as a particular table making it possible to visualize the accuracy of a supervised learning algorithm. Matrix columns symbolize the instances in a predicted class whereas rows represent the instances in an actual class (or vice-versa). The appellation is due the fact that it makes it simple to see if the system confuses two categories (i.e. mislabeling one as another) [15]. In this work we investigate many supervised learning algorithms such as: SVM [16], k-nearest neighbors [17] and Boosted Regression Trees [18,19] to classify an image. Each image in dataset is encoded by its "BoF" histogram vector as shown in In the following we provide a summary of different experiments that we use to evaluate the performance of our image classification machine learning framework. Our results are reported on Calltech 101 image dataset to which we have added some new images of existing categories. We are interested in stop sign category recognition. A. SURF Local Feature Extractor and Descriptor In this experiment, we test the local feature extractor SURF and its robustness in matching features even after rotation and scaling image (Fig. 3, Fig. 4). B. Bag of Features Image Encoding We use BoF to encode each image of the dataset into a vector feature which rep-resents the histogram of visual word occurrences contained in it (Fig. 5). C. Classifier Training Process The encoded training images are fed into a classifier training process to generate a predictive model. In this section, we are interested in measuring the classifier average accuracy and its confusion matrix. Image categories from Calltech101 dataset used are described in TABLE I. 1) Experiment 1: Classifier Evaluation Based on the Number of Image Categories. For tests we use the SURF extractor and the Linear SVM classifier. We chose 30% among images for training and the remaining for validation. The obtained confusion matrixes are shown in TABLE II. , TABLE III. , TABLE IV. and TABLE V. TABLE II As shown in Fig. 
6, the average accuracy of the classifier is influenced by the number of categories in the training dataset: the accuracy decreases as the number of categories increases. 2) Experiment 2: Evaluating the Image Category Classifier with a Color Extractor. The linear SVM classifier is applied to the stop sign, ferry and laptop categories, using a global color feature extractor instead of the SURF technique. The achieved average accuracy is 0.76, as shown in TABLE VI. For our task, a local feature extractor (SURF) therefore works better than a global feature extractor. This result is expected, since global feature extraction is better suited to scene categorization than to object classification [20]. 3) Experiment 3: Training Learner Evaluation. We next fix the number of categories to 4 and the feature extraction technique to SURF, and evaluate models while varying the classifier algorithm: SVM, KNN and ensemble classifiers. We then generate the histogram (Fig. 7) of the average accuracy for each training classifier. The measurements show that the image classification process performs best with SVM-based learners: the cubic SVM reaches an average accuracy of 90%, whereas the KNN techniques offer an average accuracy of around 65%. Among the ensemble classifier trainers (the last two tested algorithms), bagged trees achieve the best accuracy. VI. CONCLUSION In this paper, we reviewed the different techniques and algorithms used in our machine learning framework for image classification. We presented the machine learning state of the art applied to computer vision, introduced the Bag of Features paradigm and highlighted SURF as its technique for image feature extraction and description. The experiments showed that combining the SURF local feature extractor with an SVM (cubic SVM) training classifier yields the best average accuracy. In the test scenarios we focused on the stop sign category, as we plan to apply the trained classifier in a robotic system.
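As a compact end-to-end illustration of the pipeline evaluated above (descriptor clustering into N visual words, L2-normalized histogram encoding, and classifier comparison), the following sketch uses scikit-learn. It is not the authors' MATLAB code: the vocabulary size, the assumed input arrays and the classifier settings are illustrative choices, and the "cubic SVM" is approximated here by a degree-3 polynomial kernel.

```python
# Sketch of Bag-of-Features encoding plus classifier comparison (assumptions:
# `descriptors_per_image` is a list of (n_i x 64) SURF descriptor arrays and
# `labels` the image categories; N = 500 visual words is an arbitrary choice).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

def build_vocabulary(descriptors_per_image, n_words=500):
    all_desc = np.vstack(descriptors_per_image)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bof_histogram(descriptors, kmeans):
    words = kmeans.predict(descriptors)                  # nearest centroid (Euclidean distance)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / (np.linalg.norm(hist) + 1e-12)         # L2 normalization of the histogram

def compare_classifiers(X, y):
    models = {
        "cubic SVM (poly, degree 3)": SVC(kernel="poly", degree=3),
        "KNN (k=5)": KNeighborsClassifier(n_neighbors=5),
        "bagged trees": BaggingClassifier(n_estimators=50, random_state=0),
    }
    return {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}

# Usage sketch:
# kmeans = build_vocabulary(descriptors_per_image)
# X = np.array([bof_histogram(d, kmeans) for d in descriptors_per_image])
# print(compare_classifiers(X, np.array(labels)))
```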
Disruption of the brain-derived neurotrophic factor (BDNF) immunoreactivity in the human Kölliker-Fuse nucleus in victims of unexplained fetal and infant death Experimental studies have demonstrated that the neurotrophin brain-derived neutrophic factor (BDNF) is required for the appropriate development of the central respiratory network, a neuronal complex in the brainstem of vital importance to sustaining life. The pontine Kölliker-Fuse nucleus (KFN) is a fundamental component of this circuitry with strong implications in the pre- and postnatal breathing control. This study provides detailed account for the cytoarchitecture, the physiology and the BDNF behavior of the human KFN in perinatal age. We applied immunohistochemistry in formalin-fixed and paraffin-embedded brainstem samples (from 45 fetuses and newborns died of both known and unknown causes), to analyze BDNF, gliosis and apoptosis patterns of manifestation. The KFN showed clear signs of developmental immaturity, prevalently associated to BDNF altered expression, in high percentages of sudden intrauterine unexplained death syndrome (SIUDS) and sudden infant death syndrome (SIDS) victims. Our results indicate that BDNF pathway dysfunctions can derange the normal KFN development so preventing the breathing control in the sudden perinatal death. The data presented here are also relevant to a better understanding of how the BDNF expression in the KFN can be involved in several human respiratory pathologies such as the Rett's and the congenital central hypoventilation syndromes. INTRODUCTION To accomplish all vital homeostatic functions, a proper respiratory rhythmogenesis is crucial at birth. The central ventilatory rhythm is produced and coordinated by a network of neuronal centers located in the brainstem (Ramirez and Richter, 1996;Viemari et al., 2003;Wong-Riley and Liu, 2005). Experimental studies have demonstrated that during prenatal development this circuitry undergoes marked maturational changes which include modifications in the morphological, biochemical and electrophysiological properties of specific neurons of the respiratory components (Ritter and Zhang, 2000;Feldman et al., 2003;Alheid and McCrimmon, 2008), as well as enhancement of their synaptic interactions (Paton and Richter, 1995), so that it will be fully developed and operating at birth. The pontine Kölliker-Fuse (KF) is a key nucleus in the respiratory network for primary breathing, with strong implications in pre-and postnatal life. During intrauterine life, the KF inhibits the central and peripheral chemoreceptors (which are already fully developed and potentially functional), and therefore any respiratory reflex. At birth, the KF abruptly reduces its inhibitory effects and becomes active as a respiratory center that starts up the ventilatory activity through extensive afferent and efferent connections with the other respiratory-related structures (Dutschmann et al., 2004;Ezure, 2004;Dutschmann and Herbert, 2006). Studies by Erickson et al. (1996) firstly demonstrated that brain-derived neurotrophic factor (BDNF), a member of the neurotrophin class of growth factors, is required for the development of normal breathing in mice, both for regulating the maturation of specific neurons in brainstem centers involved in respiratory control, and for stimulating their connections. Other authors have since studied this topic (Balkowiec and Katz, 1998;Katz, 2003;Lu, 2003;Kron et al., 2007a,b;Ogier et al., 2013). In particular, Kron et al. 
(2007a,b) showed the specific involvement of BDNF in both fetal GABAergic inhibitory and postnatal glutamatergic excitatory synaptic transmission in the KF of rats. This behavior of the BDNF is very likely similar in humans but the related molecular mechanisms of expression and functions are largely unknown. The present study was undertaken to specifically test the hypothesis that the KF nucleus is a target of BDNF signaling to foster a normal breathing pathway. In particular, we supposed that the BDNF can contribute to the inhibitory and stimulatory actions on breathing exerted by the KFN before and after birth, respectively. To approach this issue we applied BDNF immunohistochemistry to serial histological sections of brainstem, particularly of the specific specimen where the KF nucleus (KFN) is located, from subjects who died perinatally of both known and unknown causes. The aims of the research were: firstly to evaluate whether and how the KF neurons display BDNF immunoreactivity; secondly, to investigate whether developmental defects of the KFN, that are not uncommon in sudden perinatal deaths (Lavezzi et al., 2004a,b,c), are associated with alterations of BDNF expression. The study protocol included immunohistochemical detection in those same cases of apoptosis, given the important role of the BDNF also in neuronal survival and/or programmed death occurring during the development of the nervous system (Ricart et al., 2006), and of the Glial Fibrillary Acid Protein (GFAP), a marker of reactive gliosis in neurodegenerative processes (Sofroniew and Vinters, 2010;Robel et al., 2011). METHODS A total of 45 brains were studied, from 27 ante-partum stillbirths (25-40 gestational weeks-gw, mean age: 37.5 gw), and 18 infants who survived for periods ranging between 1 h and 6 months (mean age: 3.2 months). All these cases were sent to the "Lino Rossi" Research Center of Milan University according to the application of the Italian law n.31/2006 "Regulations for Diagnostic Post Mortem Investigation in Victims of sudden Infant Death Syndrome (SIDS) and Unexpected Fetal Death." This law decrees that all infants suspected of SIDS who died suddenly in Italian regions within the first year of age, as well as all fetuses who died without any apparent cause (SIUDS: Sudden Intrauterine Unexplained Death Syndrome), must be submitted to a thorough diagnostic post-mortem investigation. CONSENT Parents of all the victims of the study provided written informed consent to autopsy, and approval was obtained from the Milan University "Lino Rossi" Research Center institutional review board. The victims were subjected to a complete autopsy, including examination of the placental disk, umbilical cord and membranes in fetal deaths. In all cases an in-depth histological examination of the autonomic nervous system was made, according to the protocol provided by the Law 31 (Matturri et al., 2005(Matturri et al., , 2008. Briefly, examination of the brainstem was performed, after fixation in 10% phosphate-buffered formalin, through the sampling of three specimens, as shown in Figure 1. The first specimen, ponto-mesencephalic, includes the upper third of the pons and the adjacent portion of mesencephalon. The second extends from the upper medulla oblongata to the adjacent portion of the caudal pons. The third specimen takes as reference point the medullary obex, a few mm above and below it. The three samples were embedded in paraffin and serially cut at intervals of 30 µm. 
For each level, twelve 5 µm sections were obtained, two of which were routinely stained using alternately hematoxylin-eosin and Klüver-Barrera stains, while three sections were treated for immunohistochemical detection of BDNF, apoptosis and GFAP, respectively. The remaining sections were saved for further investigations and stained as deemed necessary. The routine histological evaluation of the brainstem was performed on the locus coeruleus and the Kölliker-Fuse nucleus (the focus of this study) in the first specimen; on the parafacial/facial complex, the superior olivary complex and the retrotrapezoid nucleus in the second sample, and on the hypoglossus, the dorsal motor vagal, the ambiguus, the pre-Bötzinger, the inferior olivary, the arcuate nuclei and the solitary tract complex in the third medullary sample. In 29/45 cases, after the in-depth autoptic examination, the death remained totally unexplained. A diagnosis of "SIUDS" was made for 19 fetuses who died suddenly after the 25th gestational week, and a diagnosis of "SIDS" for 10 infants who died within the first 6 months of life. In the remaining 16 cases, 8 stillbirths, and 8 infants, a precise cause of death was formulated at autopsy. These cases were regarded as "controls." Table 1 summarizes the subjects analyzed in this study, indicating the sex distribution, range of ages and death diagnoses. For each case, a complete clinical history, with particular reference to maternal lifestyle (including cigarette smoking, alcohol and drug abuse), was collected. None of the mothers had any significant pathology. Thirteen of the 29 SIUDS/SIDS mothers (45%) were active smokers before and during the pregnancy, smoking >3 cigarettes/day. The remaining 16 mothers (55%) reported no history of cigarette smoking. Three mothers of victims in the control group (19%) had a smoking habit, while 13 mothers (81%) were non-smokers. Immunohistochemical techniques BDNF detection. Sections from paraffin-embedded tissue blocks were stained using commercially supplied rabbit monoclonal antibodies against the brain-derived neurotrophic factor, also known as BDNF [abcam (EPR1292), ab108319]. Slides were boiled for the antigen retrieval in EDTA buffer, using a microwave oven, at 600 W for 3 times at 5 min each, and finally cooled. The antibody was diluted 1:140 in PBS. A standard ABC technique avidin-biotin complex (Vectastain elite ABC KIT, PK-6101) was used with HRP-DAB to visualize and develop the antigenantibody reaction. All the slides selected for this study were submitted at the same time to the immunohistochemical procedures, with particular attention to the simultaneous incubation in the DAB-peroxidase solution to avoid differences in the immunostaining. Sections were counterstained with Mayer's hematoxylin, than coverslipped. A set of sections from each group of the study was used as negative control. Precisely, the tissue samples were stained using the same procedure but omitting the primary antibody in order to verify that the immunolabeling was not due to nonspecific labeling by the secondary antibody. In fact, if specific staining occurs in negative control tissues, immunohistochemical results should be considered invalid. Quantification of BDNF immunohistochemical expression. 
The degree of positive immunoreactivity in the brainstem histological sections was defined for every case by two independent and blinded observers as the number of neurons with unequivocal brown immunostaining, divided by the total number of cells counted in the same area, expressed as percentage (BDNF immunopositivity index: BDNF-Index). BDNF-Index was classified as: "Class 0" for absolutely no staining (negativity); "Class 1" when the index of strong immunopositive neurons was < 10% or even >10% but only of slightly immunopositive cells (weak positivity); "Class 2" with a percentage of intense immunopositive neurons between 10 and 30% (moderate positivity); "Class 3" with a percentage of intensely brown neurons >30% of the counted cells (strong positivity). All slides were evaluated using a Nikon E600 microscope with Axioplan objectives and identical ND (Neutral Density) filters. Images were acquired at different magnification using a Microlumina Ultra Resolution Scanning Digital Camera. Apoptosis detection. To detect cells undergoing apoptosis, we used the technique of Terminal-Transferase dUTP Nick End labeling technique (TUNEL Apoptag plus peroxidase in situ Apoptosis detection kit, S7101, Chemicon). This identifies early nuclear DNA fragmentation by specific binding of terminal deoxynucleotidyl transferase (TdT) to 3 -OH ends of DNA. Sections were pretreated with proteinase k (20 µg/ml) for 15 min. Endogenous hydrogen peroxidase activity was quenched in 3% hydrogen peroxide. After a series of rinsing, nucleotides labeled with digoxigenin were enzymatically added to the DNA by TdT. The incubation was carried out for 60 min the labeled DNA was detected using anti-digoxigenin-peroxidase for 30 min. The chromogen diaminobenzidine tetra hydrochloride (DAB) resulted in a brown reaction product. Incubation without TdT served as the negative control. GFAP detection. To reveal the reactive astrocytes, sections were deparaffinized and washed in PBS. After blocking endogenous peroxidase with 3% H 2 O 2 , the slides were pretreated in a microwave-oven using a citrate solution (pH 6). Then the sections were incubated overnight with primary monoclonal antibody NCL-GFAP-GA5 (anti GFAP, Novocastra, Newcastle Tyne, United Kingdom) at a dilution of 1:300. Immunohistochemical staining was performed with the peroxidase-antiperoxidase method and the avidin-biotin complex technique (ABC Kit, Vectastain, Vector Laboratories Inc., Burlingame, CA, U.S.A.). Diaminobenzidine (DAB, Vector Laboratories Inc., Burlingame, CA, U.S.A.) was used as chromogen substrate and counterstained with light hematoxylin. Negative controls of the same tissue were done using PBS instead of primary antibody. neurotransmitter GABA, was performed by anti-parvalbumin mouse monoclonal IgG antibody (Millipore Chemicon International cat. MAB1572). Sections, after deparaffinizing and rehydrating, were immersed and boiled in a citrate solution pH 6.0 for the antigen retrival with a microwave oven, having first blocked endogenous peroxidase by 3% hydrogen peroxide treatment. Then sections were incubated with the primary antibody overnight (1:1000 dilution) in a wet chamber. Samples were washed with a PBS buffer and processed with a usual avidinbiotin-immunoperoxidase technique and finally counter-stained with Mayer's Hematoxylin and coverslipped. Negative controls were prepared omitting the primary antibody and replacing only with PBS. 
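As a minimal illustration of the four-class BDNF-Index scoring scheme defined above, the sketch below (Python, with hypothetical cell counts) computes the percentage of immunopositive neurons and maps it to Classes 0-3. The handling of the "weak positivity" case follows our reading of the stated criteria, and the separation of the counts into strongly and slightly stained cells is an assumption used only to express that rule.

```python
# Sketch of the BDNF-Index scoring (Classes 0-3) as described in the Methods;
# the exact tie-breaking between "weak" and "strong" staining is an assumption.
def bdnf_index(strong_positive, weak_positive, total_cells):
    """Return (index_percent, class_label) for one KFN section."""
    if total_cells == 0:
        raise ValueError("no cells counted")
    strong_pct = 100.0 * strong_positive / total_cells
    any_staining = (strong_positive + weak_positive) > 0
    if not any_staining:
        return 0.0, 0        # Class 0: negativity
    if strong_pct < 10.0:
        return strong_pct, 1 # Class 1: weak positivity (incl. only slightly immunopositive cells)
    if strong_pct <= 30.0:
        return strong_pct, 2 # Class 2: moderate positivity (10-30% intensely stained neurons)
    return strong_pct, 3     # Class 3: strong positivity (>30% intensely stained neurons)

# Example with hypothetical counts averaged over two blinded observers:
# print(bdnf_index(strong_positive=42, weak_positive=10, total_cells=120))  # -> (35.0, 3)
```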
Statistical analysis The statistical significance of direct comparisons between the groups of victims was determined using analysis of variance (ANOVA). Statistical calculations were carried out with SPSS statistical software. The differences were statistically significant if p-value was <0.05. Control group Morphology. We firstly examined the KFN in serial histological sections from the first brainstem sample (Figure 1) of all 16 subjects who died of known causes, with the aim of delineating the normal features of this nucleus in human perinatal life. This examination reaffirmed in more detail our previous reports on this topic (Lavezzi et al., 2004a,b), establishing that the KFN, already well developed at 25 gestational weeks, extends longitudinally from the rostral pons to the lower portion of the mesencephalon, up to the level just where the caudal pole of the red nucleus appears. Its cytoarchitecture is well visible in the more cranial transverse sections of the pons, namely those bordering the caudal mesencephalon, identifiable by the presence of the superior cerebellar peduncle decussation. Figure 2 represents a transverse cranial section of rostral pons showing the location of the KFN. The KFN appears as a group of large neurons, located between the peduncle crossing and the medial lemniscus. These neurons show a distinct, eccentric nucleus with an evident nucleolus, and abundant cytoplasm with Nissl substance located at the cell periphery. On the basis of the neuronal arrangement, it is possible to define two KF subnuclei: the "compactus subnucleus," consisting of a cluster of a few large neurons, and the adjacent "dissipatus subnucleus" with scattered neurons. Intermixed with these large neurons, smaller cells (interneurons and astrocytes) are visible (Figure 3). Interneurons in particular, in contrast to the typical morphological features of the large KF neurons, had, beyond the lower cell body size, wrinkled nuclei, indistinct nucleoli, inconspicuous Nissl bodies and small extension of the dendritic arbor. In addition, their presence has been confirmed by the specific immonohistochemical method using the anti-parvalbumin antibody as marker for inhibitory GABAergic interneurons. BDNF immunohistochemistry. A noteworthy observation was that the KF neurons display BDNF immunoreactivity prevalently in fetal life (Figure 4). An intense expression with dark brown staining in the cytoplasm of neurons ("Class 3" of BDNF-Index) was seen in all the 8 fetuses and in 1 newborn, who died in the first hours of life. Only in a 3 month-old victim a weak immunopositivity ("Class 1" of BDNF-Index) of the KF neurons was observed. Apoptosis immunohistochemistry. Significant apoptosis, after application of the TUNEL method, was absent in the KFN of the fetal control group. Programmed cell death was, instead, confined to several interneurons in the KF area in 7 of the 8 control infants (Figure 5). GFAP immunohistochemistry. Rare astrocytes not expressing detectable levels of GFAP with none sign of immunopositive reactive astrogliosis were found in the KFN area of all the control cases. Morphology. In 14 subjects who died suddenly (7 SIUDS and 7 SIDS), the KF structure did not differ from those of agematched controls. However, a decreased number of KF neurons FIGURE 5 | Apoptotic interneurons in the Kölliker-Fuse nucleus in a newborn of the control group (2 month-old). Full arrows indicate immunopositive interneurons; empty arrows indicate some of the immunonegative large neurons. 
TUNEL metod immunostaining. Scale bar = 10 µm. (hypoplasia) was observed in 9 late fetal deaths aged from 38 to 40 gestational weeks and in 3 newborns who died within the first day of life (Figure 6). In 3 stillbirths (28-32 gestational week-olds), the KFN was not detectable, unlike to the age-matched control fetuses in which the KF is already well-developed, thus allowing a diagnosis of "agenesis" of this nucleus. BDNF immunohistochemistry. Results consistent with the control group observations were found in 17 cases (positive BDNF immunoexpression in 10 SIUDS and negative immunoexpression in 7 SIDS). An irregular BDNF expression was observed in the remaining 12 cases: negativity (Class 0 of BDNF-Index) was found in the three fetuses with KFN agenesis and in 4 sudden deaths with hypoplasia; very weak to moderate immunopositivity . Klüver Barrera staining. Scale bar = 10 µm. ml, medial lemniscus; scpd, decussation of the superior cerebellar peduncles. ("Class 1" and "Class 2" of BDNF-Index), only distributed at the cell periphery, was highlighted in 2 SIUDS cases (Figure 7) and in 3 newborns died in the first hours of life with KFN hypoplasia. Collectively, a significantly greater proportion of BDNF signaling dysfunctions were observed in SIUDS/SIDS victims compared with controls (41 vs. 12%, p < 0.05). Apoptosis immunohistochemistry. Contrary to what was observed in controls, positive TUNEL staining was detected in the KF interneurons of 8 SIUDS cases, and negative apoptotic signals in 4 SIDS. Frequently these results were associated to negative/weak BDNF expression and KFN hypoplasia. GFAP Immunohistochemistry. Numerous reactive astrocytes, characterized by high-level expression of GFAP immunoreactivity in spongiform hypertrophic cell bodies and striking increase in the number, length and thickness of GFAP-positive processes, were found nearby the large neurons of the KFN in about 40% of both SIUDS and SIDS victims (8/19 and 4/10, respectively) (Figure 8). In Table 2, all the results related to the KFN are reported. Overall, we found one or more combinations of KFN pathological findings (i.e., hypoplasia, gliosis, irregular manifestation of the BDNF and apoptosis) in 19 victims of sudden death (14 SIUDS and 5 SIDS) and only in 1 control group case. Thus, a significantly greater proportion of neurological alterations of the KFN was observed in victims of sudden death as compared with controls (p < 0.01). OTHER BRAINSTEM PATHOLOGICAL RESULTS A subset of SIUDS (11/19) cases showed hypoplasia of the arcuate and the pre-Bötzinger nuclei in the medulla oblongata, and of the facial parafacial complex in the caudal pons. In 4 cases these alterations were concomitant with KFN hypoplasia. In the SIDS group the most frequent alteration was hypoplasia of the arcuate nucleus (6/10). Also in 25% of the control cases (4/16), both fetal and infant deaths, hypoplasia of the arcuate nucleus was detected. In the majority of cases (both control and SIUDS/SIDS groups), regardless of the expression pattern in the KFN, the BDNF was also densely expressed in the ventrolateral subnucleus of the solitary tract complex, in the pre-Bötzinger nucleus in the medulla oblongata, and in the retrotrapezoid nucleus in the caudal pons. CORRELATION OF KFN FINDINGS WITH SMOKE EXPOSURE On the whole, the morphological and immunohistochemical alterations of the KFN resulted significantly related to maternal smoking (p < 0.01). 
In fact, in 10 of the 13 victims of sudden death with a smoker mother (77%) one or more developmental defects of the KFN were present. Similarly, in the control group, 2 of the 3 newborns (67%) showing anomalous BDNF positive immunoexpression, had a smoking mother, confirming the role of smoke absorption in the neuropathogenetic manifestation of this growth factor. DISCUSSION From the experimental reports in literature there is clear evidence that the neurotrophin BDNF is important for the maturation of the respiratory network during the prenatal period and for the primary generation of the respiratory rhythm at birth. In addition, BDNF has an important role in the stabilization of central breathing that occurs after birth (Balkowiec and Katz, 1998;Katz, 2003;Lu, 2003;Ogier et al., 2013). In this study we examined the expression of BDNF in humans, mainly in the KFN, an essential component of the respiratory network with important implications before and after birth. Our research was performed on a wide set of fetuses and newborns who died of both known (controls) and unknown causes (SIUDS/SIDS). Only limited data are available at this moment regarding the localization and the cytoarchitecture of the human KFN. In this study we have shown, validating our previous reports (Lavezzi et al., 2004a,c), that the KFN is well analyzable in histological sections from the dorsolateral area of the cranial pons, ventral to the superior cerebellar peduncle decussation, Our data are supported by the study of Pattinson et al. (2009). These authors, through combined functional and structural magnetic resonance imaging techniques applied to brainstem of human volunteers, observed, after chemical stimulation of breathing, an intense activation in a brain area of the rostral dorsal pons corresponding to the KFN localization highlighted by us. In the selected histological sections of control cases we first defined the cytoarchitecture of this nucleus, consisting of a small population of large neurons intermixed with smaller interneurons and glial cells. Then, we found developmental changes in the cytoarchitectural organization and in the immunoexpression of BDNF in the KFN, particularly in sudden fetal deaths (SIUDS). This was illustrated in particular by the prevalent negativity of BDNF signals, unlike the intense expression found in age-matched fetal controls, that coincides with the peculiar BDNF function as breathing-inhibitor shown in experimental studies in prenatal life (Kron et al., 2007b). The disappearance of positivity observed after birth, above all in control cases, has been interpreted, on the contrary, as a necessary step to allow for breathing to start. The BDNF immunopositivity, even if weak, observed in the KFN of several SIDS cases can therefore indicate a continuation of the prenatal inhibitory activity after birth, leading to severe respiratory deficits and consequently death. A remarkable observation was that BDNF deficiency in the KF neurons of SIUDS and its expression in SIDS victims were frequently associated with a delayed morphological maturation of this nucleus (KFN hypoplasia), thus demonstrating that alterations of BDNF expression can derange the KF maturation. Therefore, the data obtained from this study essentially indicate that: (1) BDNF has direct effects on breathing inhibition in fetal life; (2) BDNF is required for normal KFN development in intrauterine life but is not involved in starting the breathing and the modulation promoted by the KFN after delivery. 
Indeed, BDNF expression in newborns seems to hinder the ventilatory activity. Our considerations are, in general, in agreement with the report by Tang et al. (2012), providing evidence of an abnormal expression of BDNF in respiratory-related brainstem structures (although the KFN was not included in their observations) in SIDS as compared to non-SIDS infants. The results highlighted by the application of TUNEL immunohistochemistry are also interesting. We observed, in fact, the presence of apoptotic interneurons in the KFN of newborns, mainly of the control group. Interneurons are typically small cells distributed in the central nervous system (CNS), that form short-distance connections with neighboring large neurons through the use of the neurotransmitter GABA to provide a synaptic inhibitory control (Sato et al., 2014). During gestation the activity of the respiratory muscles is disabled by the growing number of inhibitory interneurons, densely distributed above all in the KFN area. At physiological delivery, these interneurons are generally stressed, consuming plenty of oxygen and glucose (Funk et al., 2008). They are then eliminated by apoptosis, as we have tested in the KFN of control subjects, thus allowing the breathing to start (Morpurgo et al., 2004). The high percentage of SIDS cases without apoptotic signals in the KFN observed in this study could be attributable to the persistence, after birth, of active interneurons that preserve their inhibitory effect on the ventilation, thus leading to a fatal conclusion. The reactive GFAP-immunopositive astrocytes we found around the KF neurons in SIUDS/SIDS victims should also be underlined. Astrocytes are the major glial cell population within the CNS. They play important physiological roles in brain functions through the release of several neurotrophic factors that represent the primary event in the maintenance of CNS homeostasis, providing support and protection to neurons (Hughes et al., 2010;Jones and Bouvier, 2014). In addition, astrocytes have the ability to rapidly react to various noxious neuronal insults, leading to vigorous astrogliosis (Gourine et al., 2010;Robel et al., 2011). Subsequently, after severe activation, astrocytes secrete neurotoxic substances and express an enhanced level of glial fibrillary acidic protein (GFAP), which is mostly considered to be a marker of hypoxic states (Chekhonin et al., 2003). Hypoxia induces the proliferation of activated astrocytes particularly in specific brain regions that play an important role in the physiological control of breathing, such as the KFN (Becker and Takashima, 1985;Norenberg, 1994). In our study an important risk factor responsible for hypoxic conditions has been identified in cigarette smoke, given the high incidence of smoker mothers. In cases of cigarette smoke absorption in pregnancy, nicotine and carbon monoxide easily cross the placental barrier and bind to the fetal hemoglobin. The resulting carboxyhemoglobin is not able to release oxygen, consequently inducing alterations of the physiological development of those organs and tissues most susceptible to hypoxic damage, including the brain (Lichtensteiger et al., 1988;Levin and Slotkin, 1998;Gressens et al., 2003). Besides, nicotine is one of the few lipidsoluble substances that can cross the blood-brain barrier and act directly by inducing specific molecular alterations in DNA, RNA, and proteins of the nervous cells (Gospe et al., 1996). Smoking may also be responsible for aberrant BDNF behaviors. 
In fact, we frequently observed an altered expression of BDNF in SIUDS/SIDS subjects with smoker mothers. In support of this hypothesis is the demonstration that a multitude of stimuli, including nicotine absorption, alter BDNF gene expression, by modifying BDNF mRNA in specific neuronal structures (Lindholm et al., 1994). It is important to point out several limitations of this study. Firstly, the relatively small number of cases of SIUDS and particularly of SIDS and controls, that allows us to formulate hypotheses but not gain confirmation. Secondly, failure to make a specific examination of the tropomyosin-related kinase B (TrkB) receptor, a protein with high affinity toward several neurotrophins, that interacts in particular with BDNF to mediate neuronal growth during CNS development (Soppet et al., 1991;Aoki et al., 2000;Tang et al., 2010). Such research is now in progress in our Research Center, however, as we are planning to examine the behavior of this specific receptor in a wider set of perinatal deaths. In conclusion, we can state that BDNF appears to be required for the development of the KFN in human fetal life and for its inhibitory function on the central respiratory output occurring in utero. Interestingly, the BDNF role in the breathing mechanism is different in fetal and neonatal life. In fact, in contrast with the indispensable high BDNF expression in KF neurons in utero, its manifestation in newborns results associated to severe impairment of the respiratory activity. In any case, the altered behavior of the BDNF system observed in SIUDS/SIDS victims may hinder the KFN function, compromising the respiratory control in both fetal and postnatal life. Our findings differ from the reports in literature related to experimental studies, showing BDNF signaling in KF neurons also after birth. It should be noted, however, that this different behavior in man only affects the KFN. In fact, BDNF was expressed in other respiratory structures (in particular the solitary tract complex, the pre-Bötzinger and the retrotrapezoid nuclei) in all stages of life. These observations emphasize the need to intensify the studies on BDNF expression in the human brainstem. Finally, we point out that BDNF, as a neuromodulator in human KF, may have a physiological importance and clinical relevance in a wide range of human developmental disorders of the breathing. A dysregulation of BDNF expression has been linked with Rett's syndrome (RTT), for example, a childhood neurological disease associated with a mutation in the gene MeCP2 and serious impairment of the ventilatory activity (Rett, 1986;Katz et al., 2009). Mironov et al. (2009) demonstrated, in a mouse model of RTT, that BDNF deficits are most prominent in neuronal structures of the brainstem deputed to autonomic and respiratory control. Importantly, Wang et al. (2006) in the nucleus of the solitary tract, a nucleus proposed as source for BDNF release in the KFN. In the idiopathic congenital central hypoventilation syndrome (CCHS), a rare disorder characterized by an abnormal control of respiration despite the absence of neuromuscular or lung disease or an identifiable brainstem lesion (Weese-Mayer et al., 1992), several authors have shown an important contribution of BDNF system dysfunction to the pathophysiological mechanism leading to this specific respiratory deficiency. Chiaretti et al. 
(2005), in particular, found a reduction of BDNF in the cerebrospinal fluid of patients affected by Ondine's curse compared with the mean level in the control group. Weese-Mayer (Weese-Mayer et al., 2002) highlighted a mutation of the BDNF gene, although not very frequent, in children with CCHS, supporting the relevance of this identified mutation in the respiratory control. A further interesting point to verify will be whether the link between BDNF deregulations and breathing activity perturbations highlighted in these pathologies originates from an impaired synaptic transmission and developmental alterations of the KFN. AUTHOR CONTRIBUTIONS Anna M. Lavezzi planned the study, analyzed the data and wrote the manuscript with collaborative input and extensive discussion with Luigi Matturri. Melissa F. Corna carried out the immunohistochemical and the histochemical study and participated in the evaluation of the results. All Authors read and approved the final manuscript.
FPGA Implementation of LDPC Decoder with Low Complexity According to the limitation of resources on satellite, this paper focuses on the design and realization of low complexity LDPC decoder. A new implementation method of LDPC decoder is proposed, a various kinds of LDPC codes could be supported. Finally, a (4096, 2048) LDPC decoder is implemented for verification based on a Xilinx Vertex4 xc4vsx35 FPGA platform. The implementation result shows that only 4% FPGA logic resources were consumed and the maximum clock frequency could achieve 180MHz. Introduction The low-density parity-check (LDPC) code was proposed by Gallager [1] in 1962.Such codes are linear block codes with low density parity check matrix.Since its rediscovery in the 1990s, the LDPC code has been widely applied to DVB-S2 and wireless metropolitan area networks (IEEE 802.6) because of its excellent error correction performance and easy-to-implement decoding structure.A corresponding LDPC coding scheme is proposed by the Consultative Committee for Space Data System for near-earth and deep-space application environments.Existing research on LDPC codes focuses mainly on the design and engineering realization of LDPC encoder and decoder.LDPC decoder design with low-complexity realization is the basis and prerequisite of LDPC code for satellite applications customized for the limited payload resources of the satellite. The three main realization structures of LDPC decoder are serial, full parallel, and partial parallel structures.The serial decoding structure presents low realization complexity but long decoding delay and low throughput.The full parallel structure exhibits low decoding delay, large throughput, and high complexity; however, it is difficult to achieve the hardware implementation.The partial parallel structure can achieve a good balance between realization complexity and decoding speed.Thus, this decoding structure is commonly applied.In 2006, Verdier [2] proposed a generalized decoding structure based on the modified minimum sum (Min-sum) algorithm.A parallel decoding algorithm of specific pseudo-random rule LDPC codes and irregular LDPC codes is implemented on the ALTERA APEX 20 Ke FPGA platform.The decoder consumes 1591 logic cells and 160 KB of space, and the decoder can work at the maximum clock rate of 37.89 MHz.Arnone et al. [3] analyzed and compared the implementation complexity and performance difference of decoder based on logarithmic sum-product and the simplified soft Euclidean distance iterative decoder.A low-complexity FPGA realization structure is thus proposed for the two decoding algorithms.In addition, some LDPC decoders are proposed in References [4-8]. In this study, a low-complexity LDPC decoding method is proposed.The decoder uses the normalized Min-sum algorithm (NMSA).By using a dual-port Block RAM of FPGA chip, a general low-complexity two-channel LDPC decoder is designed.The decoder can support LDPC with multiple bit rates and codes.In the end, the design based on Xilinx xc4vsx35 chip is experimentally verified, and the results are compared with the traditional serial decoders and partial parallel decoders.The results show that the hardware resource consumption of the LDPC decoder designed by the proposed method is obviously less than that of traditional serial and partial parallel decoders.Therefore, the proposed decoder structure has certain engineering practical value. 
NMSA The NMSA, introduced by Chen and Fossorier [9], is a simplified version of the Log-BP algorithm; the details are as follows. 1) Initialization. For AWGN channels, the variable node information is initialized to $L^{(0)}(q_{ij}) = L(p_i) = 2y_i/\sigma^2$, where $y_i$ is the soft received value and $\sigma^2$ is the noise variance. 2) Check node update (CNU). The CNU at the $l$-th iteration can be expressed as $L^{(l)}(r_{ji}) = h \cdot \prod_{i' \in R_j \setminus i} \mathrm{sign}\big(L^{(l-1)}(q_{i'j})\big) \cdot \min_{i' \in R_j \setminus i} \big|L^{(l-1)}(q_{i'j})\big|$, where $h$ is a correction factor in the range (0, 1) and $R_j \setminus i$ denotes the set of variable nodes connected to check node $j$, excluding variable node $i$. 3) Variable node update (VNU). The VNU at the $l$-th iteration can be expressed as $L^{(l)}(q_{ij}) = L(p_i) + \sum_{j' \in C_i \setminus j} L^{(l)}(r_{j'i})$, where $C_i \setminus j$ denotes the set of check nodes connected to variable node $i$, excluding check node $j$. 4) Decision. After completing each iteration of the VNU, a decision is made on $L^{(l)}(Q_i) = L(p_i) + \sum_{j \in C_i} L^{(l)}(r_{ji})$: bit $\hat{x}_i$ is set to 0 if $L^{(l)}(Q_i) \ge 0$ and to 1 otherwise. If $H\hat{x}^T = 0$ holds or the maximum iteration number is reached, the decoding process ends; otherwise, the decoding continues from Step 2. FPGA-based Decoder Design The LDPC decoder is designed to reduce the realization complexity and to decrease the FPGA hardware resource consumption. In the design process, the decoder parallelism and the number of quantization bits are reduced, and the decoder structure is optimized. In the present study, a two-channel parallel decoding method is applied to fully utilize the dual-port Block RAM of the FPGA chip, and an LDPC decoder with low implementation complexity and low resource consumption is designed. A 6-bit quantization is used to achieve a balance between storage requirements and performance. A. Low-complexity Structure of LDPC Decoder The proposed realization structure of the LDPC decoder is shown in Fig. 1 (Figure 1. Realization structure of LDPC decoder). The LDPC decoder mainly consists of four parts: the CNU module, the VNU module, the intermediate variable storage module, and the memory read/write address control module. The intermediate variable storage module includes an initial log-likelihood ratio RAM (LLR_RAM), a check update result memory (Check_RAM), and a variable update result memory (Var_RAM); each of these memories is implemented using a dual-port Block RAM. The memory read/write address control module includes a CNU address control (Check_addr) and a VNU address control (Var_addr). In the initialization process, the LLR values are written to LLR_RAM and Var_RAM; the write address is generated by the Var_addr module. During the check node update, the data are read from Var_RAM according to the address generated by Check_addr and sent to the CNU module. The outputs of the CNU are written to Check_RAM, with the write address controlled by Check_addr. During the variable node update, data are first read from LLR_RAM and Check_RAM according to the address generated by Var_addr and then sent to the VNU module. After the variable node is updated, a decision can be made. If the maximum number of iterations is reached, the output of the decision is the decoding result; otherwise, the update result is written to Var_RAM and the next iteration is performed. Figure 2 shows the overall operation timing sequence of the LDPC decoder and the read/write state of the three memories during the decoding process. The read and write ports of the two memories can share the same address generation module, thus simplifying the logic and reducing hardware resource consumption. B.
CNU Module This module includes absolute value comparison, multiplicative correction, and some other operations.The realization structure of CNU is shown in Figure 3.The CNU adopts a two-channel parallel way.First, the two inputs din1 and din2 of the module are separated by the modulus value, and the corresponding sign bit sign1, sign2 and absolute value beta1, beta2 are obtained.After caching the sign bit, the update can be conducted by a simple XOR operation using Equ.(2).Therefore, the minimum value must be min1 or min_temp.The operation of the second minimum value only requires the comparison of min1 and smin_temp, as well as smin1 and min_temp.If the minimum value is min1, then the second minimum must be smin1 or min-temp.If the minimum value is min_temp, then the second minimum must be min1 or smin_temp.In this way, the comparison operation of the minimum and second minimum values can be completed using four two-input comparators.As the correction coefficient is always set to 0.8, the multiplicative operation can be finished using bitwise shift operation and subtraction approximatively. For a ( , ) n k LDPC code with a maximum row weight of a , the time required to complete the CNU is operating clocks; this time is less than half the time of the serial operation. C. VNU Module The initial LLR value and outputs of the CNU module are inputs for VNU module.Fig. 4 shows the implementation structure of VNU module.dinc represents the initial LLR value (i.e., L p L r - operation of the variable update can be completed; consequently, the variable update result can be obtained.bit_dout is the sign bit of bit_sum and is also the result of the decision output after completing an iterative decoding.operating clocks; the time is less than half the time of that in a serial decoder. D. FPGA Realization Result of LDPC Decoder In this study, we use the irregular quasi-cyclic (4096, 2048) LDPC code as an example [10][11] .The maximum number of iterations of the decoder is set to 25. 1 shows the resource consumption of the FPGA implementation with different decoder structures under the given conditions.The proposed decoding implementation structure uses only 555 Slice logic resources and 28 Block RAM memories to achieve the realization of a (4096, 2048) LDPC decoder.The hardware resource consumption of the proposed decoder structure is significantly less than that of traditional partial parallel and serial decoding structures. Conclusion A hardware implementation structure of decoder with low complexity is proposed to reduce the hardware resource consumption of LDPC decoder and address the limitation of the payload resource in satellite communication system.Accordingly, a (4096, 2048) LDPC decoder is designed based on Xilinx Vertex-4 xc4vsx35 chip.The hardware resource consumption of this decoder is much less than that of traditional parallel and serial decoding structures.The FPGA implementation results show that the proposed decoder structure can effectively reduce the hardware resource consumption and has certain engineering application value. Figure 2 . Figure 2. Operation timing sequence of the decoder The decoding delay can be expressed as follows: ( ) _ initial CNU VNU T T T T iter time     , (5) where initial T is the time allotted for initialization; CNU T and VNU T are the time required to update the check node and the variable node in one iteration, respectively; iter_time is the iteration number. Figure 3 . Figure 3. 
Check node update module structure. During the absolute value comparison operation, beta1 and beta2 are first compared after the modulus/sign separation; the minimum absolute value min_temp, the minimum position pos_tmp, and the second minimum value smin_temp are also cached. Thereafter, for each pair of absolute-value inputs, a minimum and a second minimum are selected from the four values min1, smin1, min_temp and smin_temp, because the two sets of data satisfy min1 ≤ smin1 and min_temp ≤ smin_temp. dina and dinb represent the check update results (i.e., $L(r_{ji})$). During the updating process, dinc, dina and dinb are added according to the column weight, and bit_sum is the summation. Adding bit_sum to -dina and -dinb completes the variable update $L(q_{ij})$. Figure 4. Variable node update structure. For an LDPC code with a maximum column weight of b, the time required to complete the variable node update is about $\lceil b/2 \rceil \times n$ operating clocks.
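To make the NMSA update rules concrete, the following floating-point Python/NumPy sketch implements the normalized min-sum iteration (check node update, variable node update and hard decision with a syndrome check). It is a behavioural reference only, not the fixed-point, two-channel FPGA architecture described above; the correction factor h = 0.8 and the iteration limit of 25 are taken from the text, while the parity-check matrix and LLR inputs are left to the caller.

```python
# Behavioural (floating-point) reference for the normalized min-sum algorithm.
# H: (m x n) binary parity-check matrix, llr: channel LLRs L(p_i) = 2*y_i/sigma^2.
import numpy as np

def nmsa_decode(H, llr, h=0.8, max_iter=25):
    m, n = H.shape
    rows, cols = np.nonzero(H)                 # edges of the Tanner graph
    q = llr[cols].astype(float)                # variable-to-check messages L(q_ij)
    r = np.zeros_like(q)                       # check-to-variable messages L(r_ji)
    x_hat = (llr < 0).astype(int)
    for _ in range(max_iter):
        # Check node update: sign product and normalized minimum, excluding the target edge.
        for j in range(m):
            e = np.where(rows == j)[0]
            sgn = np.sign(q[e]); sgn[sgn == 0] = 1.0
            mag = np.abs(q[e])
            for k, edge in enumerate(e):
                others = np.delete(np.arange(len(e)), k)
                r[edge] = h * np.prod(sgn[others]) * np.min(mag[others])
        # Variable node update and a-posteriori LLRs.
        post = llr.astype(float).copy()
        np.add.at(post, cols, r)               # L(Q_i) = L(p_i) + sum_j L(r_ji)
        q = post[cols] - r                     # extrinsic message: exclude own check input
        x_hat = (post < 0).astype(int)         # hard decision (LLR >= 0 -> bit 0)
        if not np.any(H.dot(x_hat) % 2):       # syndrome check H x^T = 0
            break
    return x_hat

# Usage sketch: build H for a small toy code, add AWGN to a BPSK-mapped codeword,
# form the channel LLRs and call nmsa_decode(H, llr).
```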
BENDING CHARACTERISTICS OF RESIN CONCRETES In this research work the bending characteristics of thermoset polymer concretes were analyzed. Various mixtures of resin and aggregates were considered, in view of an optimal combination. The Taguchi methodology was applied, in order to reduce the number of tests and in order to evaluate the influence of various parameters in concrete properties. This methodology is very useful for the planning of experiments. INTRODUCTION Polymer concrete is a kind of concrete in which a thermoset resin binds together natural aggregates, such as silica sand. Catalysts and accelerators are added up to resin before mixing with inorganic aggregates, in order to initiate the polymeric curing. In this type of concretes, water is completely absent, as it inhibits the curing of concrete [1]. Therefore, unlike cement concretes, this is a water-free concrete. Typical resins used in these concretes are polyester, epoxy or acrylic thermoset resins. Polyesters are the most used mainly for economic reasons. Resin concretes have good mechanical properties, such as high compression strength, high durability in terms of fatigue and in terms of corrosion resistance. Its permeability to liquids is generally very low and its curing times are quite fast. The industrial applications of polymer concretes are growing steadily, particularly in the area of precast concrete elements, such as façade panels [2]. The good mechanical Introduction Polymer concrete is a kind of concrete in which a thermoset resin binds together natural aggregates, such as silica sand.Catalysts and accelerators are added up to resin before mixing with inorganic aggregates, in order to initiate the polymeric curing. In this type of concretes, water is completely absent, as it inhibits the curing of concrete 1 .Therefore, unlike cement concretes, this is a water-free concrete. Typical resins used in these concretes are polyester, epoxy and acrylic thermoset resins.Polyesters are the most used, mainly for economic reasons. Resin concretes have good mechanical properties, such as high compression strength, and high durability in terms of fatigue and corrosion resistance.Its permeability to liquids is generally very low, and its curing times are quite fast. The industrial applications of polymer concretes are growing steadily, particularly in the area of precast concrete elements, such as façade panels 2 .The good mechani- Planning of testing formulations To analyze the influence of various parameters in the concrete bending strength, the following material factors are considered: • Resin type; • Resin content (%); • Charge content in resin (%); • Sand type; • Curing cycle.The variation levels for each considered factor are specified in Table 1. The number of all combinations between these five factors with two variation levels is 2-raised to five, or either thirty-two possible combinations.However, the proposed Taguchi method allows a reduction in the number of combinations to test, by the use of pre-set orthogonal array with sixteen lines, which corresponded to the different formulations to test. Using this array, not only the influence of each factor can be evaluated, but also the interactions between itself (it has fifteen degrees of freedom: five corresponding to factors and ten corresponding to its interactions). The 16 resultant formulations are presented in Table 2. 
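As a sketch of how such a sixteen-run plan can be generated, the snippet below builds a standard half-fraction of the 2^5 factorial (a resolution-V design in which the fifth factor is aliased to the four-factor interaction). This is one common way to obtain a 16-row orthogonal array for five two-level factors; it is not necessarily the exact column assignment used in Table 2, and the mapping of columns to factor names is assumed for illustration.

```python
# Sketch: generate a 16-run, two-level design for five factors as a 2^(5-1)
# half-fraction (generator E = A*B*C*D). Levels are coded -1/+1; the mapping of
# columns to "resin type", "resin content", etc. is an illustrative assumption.
import itertools
import numpy as np

def half_fraction_2_5():
    base = np.array(list(itertools.product([-1, 1], repeat=4)))  # full 2^4 design in A..D
    e = base.prod(axis=1, keepdims=True)                         # fifth column E = ABCD
    return np.hstack([base, e])                                  # 16 x 5 design matrix

design = half_fraction_2_5()
factors = ["resin type", "resin content", "charge content", "sand type", "curing cycle"]
for run, row in enumerate(design, start=1):
    levels = {f: ("level 1" if v < 0 else "level 2") for f, v in zip(factors, row)}
    # Each printed line corresponds to one concrete formulation to be cast and tested.
    print(run, levels)
```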
Materials characterization A pre-accelerated orthophtalic polyester resin (NESTE-S226E) was used, with 2% (in weight) catalyst.Also used, was a low viscosity epoxy resin (EPOSIL-551), with a maximum bending strength of 70 MPa.This resin was mixed with a hardener on a 2/3 resin, 1/3 hardener ratio. The charge incorporated in this resin was calcium carbonate.The weight percentages are related to the total weight of resin. The foundry sand used was a siliceous one, of rather uniform particles size, with an average diameter of 245 µm.Clean sand is a locally available river sand, previously washed and well graded (0.01/1.20 mm).The water content of both sands is controlled to be less than 0.1%, before being mixed with the resin. The mixture was performed mechanically, to achieve a more homogeneous material.For each formulation, nine specimens were manufactured according to RILEM standards 12 .The metallic moulds are of standard type. Experimental set-up Specimens were tested in three-point bending, after curing.An Instron universal testing machine, with a load cell of 100 kN, was used (Fig. 1).Tests were performed accord- 13 , at a rate of 1 mm/min.Loaddisplacement curves and maximum load corresponding to the collapse bending moment, were recorded.Testing set-up and sample geometry is defined in Fig. 2. Bending tests Table 3 presents the average failure load and failure stress for each formulation.In Fig. 3, is presented the typical loaddisplacement curves obtained in the bending tests of concrete specimens.These particular curves are referring to concrete formulation nº 4. The load-displacement curves for each formulation are quite similar, and the maximum load values are also very close, which is a good indication of material behaviour prediction. Variance analysis by the Taguchi method In the application of the Taguchi method, the variance analysis -ANOVA -was used, in order to analyze data obtained by the chosen orthogonal matrix.ANOVA allows the testing of the significance of the effects relatively to the random error [7][8][9][10][11] .The analysis was performed for a signifi-cance level of 5%, or for a confidence level of 95%.The variance analysis results, for bending strength response, are shown in Table 4. The last column of Table 4 represents the contributions of each factor (or interaction) to the global variance.These contributions are function of the numeric values of the effects and they indicate the level of influence in the global response, which is the bending resistance. The numeric value of an effect (E A ), or principal effect, is not more than the difference between the average values obtained on the two levels adopted for that factor.In the calculation of each these averages, all response values where that factor interacts with the level in question are considered. where A i represents the average value corresponding to all responses involving factor A with level "i".The determination of the interaction effect value between two factors (IE (A*B) ) is not so straightforward and it involves the calculus of four different average values: where A i B j represents the average value corresponding to responses involving factor A with level "i" and factor B with level "j". Response graphics Response graphics allow the evaluation of the relative importance of each factor, or interaction, in a much easier way than the numeric values of effects. 
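The effect and interaction calculations described above can also be written compactly as code. The sketch below is a minimal illustration, not the authors' analysis script: it computes a main effect as $E_A = \bar{A}_2 - \bar{A}_1$, a two-factor interaction as $IE_{(A*B)} = \tfrac{1}{2}\big[(\bar{A}_2\bar{B}_2 + \bar{A}_1\bar{B}_1) - (\bar{A}_1\bar{B}_2 + \bar{A}_2\bar{B}_1)\big]$, and the percentage contribution of each term from its sum of squares $SS = N E^2/4$ for a balanced two-level design. The `design` matrix is any coded 16 x 5 array of -1/+1 levels (such as the one in the earlier sketch) and the response vector `y` would hold the measured bending strengths.

```python
# Sketch of the Taguchi/ANOVA effect and contribution calculations for a
# balanced two-level design; `design` is 16 x 5 coded levels, `y` the responses.
import numpy as np

def main_effect(col, y):
    return y[col > 0].mean() - y[col < 0].mean()          # E_A = avg(level 2) - avg(level 1)

def interaction_effect(col_a, col_b, y):
    ab = col_a * col_b                                    # coded interaction column
    return y[ab > 0].mean() - y[ab < 0].mean()            # equals the four-average formula

def percent_contributions(design, y, names):
    n = len(y)
    ss_total = ((y - y.mean()) ** 2).sum()
    out = {}
    for i, name in enumerate(names):
        e = main_effect(design[:, i], y)
        out[name] = 100.0 * (n * e ** 2 / 4.0) / ss_total  # SS = N*E^2/4 for 2-level designs
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            e = interaction_effect(design[:, i], design[:, j], y)
            out[f"{names[i]} * {names[j]}"] = 100.0 * (n * e ** 2 / 4.0) / ss_total
    return out

# Usage sketch:
# contrib = percent_contributions(design, np.array(y), factors)
# for term, pct in sorted(contrib.items(), key=lambda kv: -kv[1]):
#     print(f"{term:35s} {pct:5.1f} %")
```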
Response graphics

Response graphics allow the evaluation of the relative importance of each factor, or interaction, in a much easier way than the numeric values of the effects. For principal effects, the interpretation of the graphics is straightforward. Figures 4 to 8 present the response graphics of the principal effects, with 95% confidence error bars. Each graphic point represents the average response for the factor at a certain level. The numeric value of the effect is precisely the difference between the two points: the higher the difference, the higher the influence of the factor. To analyze the response graphics of the interaction effects, the principal effects of the factors must be ignored and attention must be focused on the interaction itself. The interaction is graphically indicated by the lack of parallelism between the two straight lines: the less parallel the lines, the stronger the interaction. The interaction-effect response graphics are represented in Figs. 9-18, with 95% confidence error bars.

Variance analysis results

From Table 4, it can be concluded that resin type is the most influential factor, followed by resin content and charge content. The interaction between the latter and sand type also presents a significant contribution to the global response:
• Resin type: 49.66%
• Resin content: 13.69%
• Charge content: 8.33%
• Interaction Charge content * Sand type: 8.43%
The remaining factors and interactions have a very small influence on the total variation. In particular, the sand type and curing conditions factors, and their interactions, have been rejected at a significance level of 5%.

Principal effects analysis

From the analysis of the response graphics it is evident that the use of epoxy resin strongly increases bending strength when compared to polyester resin. For epoxy resin, the values are not only higher but also more reliable, due to more uniform results (Fig. 4). Responses also increase, to a lesser extent, with the use of 20% resin content, as expected, and with 0% charge content (Figs. 5 and 6). Sand type and curing treatment, when considered in isolation, have almost no influence on the concrete bending strength, as can be observed from the small slope of the corresponding graphics (Figs. 7 and 8).

Interaction effects analysis

Analyzing the interaction effects, according to Figs. 9-18, the following conclusions can be drawn for each interaction, in descending order of importance:
• Interaction between Charge content and Sand type (8.43%): Foundry sand concretes are very sensitive to the incorporation of charge in the resin formulation. The best mechanical results are obtained for 0% charge content. Clean sand concretes are less sensitive, and can be considered insensitive to the variation of charge content (the two average points are coincident) (Fig. 9). This difference in behaviour is explained by the particle size distribution of each sand. Foundry sand, with a very fine grain size, has a large specific surface and reacts poorly to the incorporation of additional fine particles. Clean sand, with a smaller specific surface, requires less binder; therefore, the concretes made with this kind of sand and with charge balance the lower effective resin content with a better filling of voids through the incorporation of calcium carbonate.
• Interaction between Resin type and Charge content (4.35%): The incorporation of charge in polyester concretes has a rather negative influence on the global response. This effect is not so pronounced in epoxy resins: on average, bending strength is reduced by 4% for epoxy concrete and by 33% for polyester concrete (Fig.
10). This phenomenon can perhaps be explained by the higher viscosity of the polyester resin and its correspondingly lower wetting ability, which makes it more susceptible to the incorporation of fine particles.
• Interaction between Resin type and Sand type (3.82%): Epoxy resin concretes present better bending behaviour when manufactured with foundry sand, whereas polyester concretes present better mechanical behaviour when incorporating clean sand (Fig. 11). This can be explained by the higher capacity of the epoxy resin to wet the aggregates, allowing finer sands, while the polyester resin, with less wetting ability, favours aggregates with a lower specific surface.
• Interaction between Resin type and Resin content (3.14%): Polyester concretes are more susceptible to resin content than epoxy concretes. An increase in resin content from 17% to 20% increases the response by about 56% for polyester resin, against an increase of about 10% for epoxy resin (Fig. 12).
• Interaction between Resin content and Sand type (2.17%): With the higher resin content (20%), better results are obtained when foundry sand is used, whereas with the lower resin content (17%) a better response is obtained with clean sand (Fig. 13). Lower resin contents require sands with lower specific surfaces, so that all the material can be wetted. When there is an excess of resin, it tends to migrate to the surface, reducing homogeneity.
• Interaction between Resin content and Charge content (1.99%): With increasing resin content, the global response values increase much more in samples with 25% charge than in those with 0% charge (Fig. 14). This behaviour is due to the fact that the incorporation of charge increases the total specific surface of the particles.
The interactions of the factors resin content, charge content, resin type and sand type with the curing treatment factor have no significant influence on the global response values. This can be observed in the almost perfect parallelism of the two straight lines in the corresponding response graphics (Figs. 15-18).

Conclusions

The objective of this research was to analyze the influence of composition and curing treatment on the bending strength of polyester and epoxy concretes, towards an optimal formulation. For that purpose, a number of concrete formulations with various curing treatments were manufactured and tested in bending. Load-displacement curves and failure loads were recorded. The planning of the tests and the evaluation of the factor effects and their interaction effects were performed with the Taguchi method, in order to reduce the total number of formulations to be tested. A variance analysis (ANOVA) was used for the data analysis. The following conclusions can be drawn:
• The most decisive factor for bending strength is the resin type, followed by resin content and charge content.
• The curing cycle does not influence the final characteristics of the concrete. A seven-day cure at room temperature is equivalent to a three-hour cure at 80 °C.
• The analysis of the principal effects and their interactions allowed establishing the most interesting level for each factor. The optimal combination, corresponding to the most resistant concrete, was found. It is composed of: epoxy resin; 20% resin content; 0% charge content; foundry sand. This optimal combination was actually tested: it corresponds to concrete nº 12, which has an average bending strength of 38 MPa. Therefore, there was no need to predict the bending strength of the concrete corresponding to the optimal formulation by the Taguchi method, or to perform the confirmation test.
Obviously, this combination is not the most economical one, considering the higher cost of the epoxy resin and the resin content involved. The best relation between price and performance was presented by combination nº 4. It is similar to nº 12, but with polyester resin instead of epoxy resin. This formulation is almost five times cheaper, and its bending strength is only 17% lower than that of the optimal combination. A compromise solution must be sought according to the specifications of the intended concrete application. Based on this research, a new study including an intermediate level of resin content is expected to be helpful in the choice of that solution.

Figure 2. Test scheme and sample geometry.
Table 1. Factors and levels.
Table 2. Formulations to be tested.
Table 3. Average failure loads and stresses.
v3-fos-license
2023-02-12T16:11:54.810Z
2023-02-01T00:00:00.000
256804533
{ "extfieldsofstudy": [ "Medicine" ], "oa_license": "CCBY", "oa_status": "GOLD", "oa_url": "https://www.mdpi.com/2077-0383/12/4/1420/pdf?version=1676025429", "pdf_hash": "42bd4d79d5191835cd257c3b23e6389fb4cf5532", "pdf_src": "PubMedCentral", "provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:164", "s2fieldsofstudy": [ "Medicine" ], "sha1": "4ac9be45c52b1c17ef2880810c78183fb88f33b9", "year": 2023 }
pes2o/s2orc
The Distribution of Autoantibodies by Age Group in Children with Type 1 Diabetes versus Type 2 Diabetes in Southern Vietnam

Asian children are increasingly being diagnosed with type 1 diabetes (T1D) or type 2 diabetes (T2D), and the presence of coexisting islet autoimmune antibodies complicates diagnosis. Here, we aimed to determine the prevalence of islet cell autoantibodies (ICAs) and glutamic acid decarboxylase 65 autoantibodies (GADAs) in children with T1D versus T2D living in Vietnam. This cross-sectional study included 145 pediatric patients aged 10.3 ± 3.6 years, with 53.1% and 46.9% having T1D and T2D, respectively. ICAs were reported in only 3.9% of pediatric T1Ds, which was not significantly different from the 1.5% of those with T2D. Older children with T1D were positive for either ICAs, or ICAs and GADAs (5-9 and 10-15 years), whereas only a small proportion of children aged 0-4 years were positive for GADAs (18%). Notably, 27.9% of children with T2D aged 10-15 were positive for GADAs, and all were classified as overweight (n = 9) or obese (n = 10). GADAs were more commonly observed in T1D patients younger than four years than ICAs, which were more prevalent in older children (5-15 years). Even though few children with T2D carried ICAs and GADAs, finding a better biomarker or an appropriate time to confirm diabetes type may require further investigation.

Introduction

The problem of diabetes in children poses significant public health implications due to the serious complications it can cause in later life, as a result of either chronically insufficient insulin secretion or impaired insulin action [1]. According to the latest ADA 2021 classification, diabetes in children is classified into type 1 diabetes (T1D), with a known mechanism underlying the appearance of islet cell autoantibodies, and type 2 diabetes (T2D), characterized by a progressive loss of adequate β-cell insulin secretion, frequently with a background of insulin resistance [1]. Worldwide, the incidence and prevalence of T1D and T2D among children is increasing, especially among the Asian population [2-4]. The early onset of diabetes means a more prolonged lifetime exposure to hyperglycemia, which is highly associated with adverse outcomes in adulthood [5]. Remarkably, children with T2D are prone to develop earlier and more severe microvascular and cardiovascular disease than those with T1D [6-8]. Therefore, it is essential to classify individuals with T1D or T2D at the time of diagnosis to determine appropriate and effective therapies [9]. As mentioned, T1D pathogenesis has been linked to a state of β-cell destruction in the presence of autoantibodies, which are considered an important factor in distinguishing the various types of diabetes (autoimmune T1A or idiopathic T1B, or T2D). Among the well-known T1D-linked autoimmune markers, islet cell autoantibodies (ICAs) and glutamic acid decarboxylase 65 autoantibodies (GADAs) are widely used [10,11]. Importantly, the factors that trigger the autoimmune phenomenon in children with a genetic susceptibility to T1D remain unknown. Evidence linking the lower ICA frequency of black patients compared to their white counterparts [12] or Caucasians [13] is partially explained by ethnicity and genetics, along with their environmental interactions.
Interestingly, the presence of autoimmune markers among Asian children is lower than in non-Asians [14], as there are still several T1B subtypes characterized by the absence of insulitis and diabetes-related antibodies [15]. In Vietnam, a previous publication revealed that a high number of T1D cases presenting with diabetic ketoacidosis in young adults are negative for pancreatic ICAs. Additionally, although the authors reported a low prevalence of T1D and T2D among children aged 11-14 years (1.04), there has been a lack of published prevalence rates for T1D and T2D in younger children and of descriptions of autoimmune markers [16]. The lower frequency of ICAs in children with T1D onset before five years of age may be due to a more rapid disappearance of islet cell antigens than in patients with a later onset [13]. Since, to our knowledge, there is little information on autoantibodies in this population so far, this study aimed to investigate the existence of autoantibodies in T1D and T2D among younger Vietnamese children aged 1-15. We also examined the prevalence of positive autoimmune markers stratified by age group. Based on the findings of this study, we will be able to gain a deeper understanding of the epidemiology of diabetes in Vietnam and the etiology of diabetes autoantibodies in Asian populations.

Study Population

This study was a retrospective cross-sectional investigation describing cases from medical records gathered at Children's Hospital 2 over five years (2015-2020). The inclusion criteria for the 145 pediatric patients were as follows: (a) patients aged from 6 months to 15 years; (b) diagnosed as having diabetes mellitus for the first time upon admission to the hospital, using the 2019 Classification and Diagnosis of Diabetes, with either HbA1c > 6.5%, fasting blood glucose > 126 mg/dL, or random blood glucose > 200 mg/dL. Thereafter, patients were divided into two groups: (1) T1D, referring to children whose diabetic signs and symptoms occurred early in life, who had no family history of diabetes, were not overweight or obese, and had a low fasting C-peptide (< 1.1 ng/mL); and (2) T2D, referring to patients whose disease developed at pubertal age, who were overweight or obese, and had high C-peptide levels (> 1.1 ng/mL) or high fasting insulin (> 2.6 mcg/mL). After diagnosis and treatment, we re-assessed serum C-peptide levels one month later to ensure that glucose toxicity would not lead to an underestimation of C-peptide levels. The T1D patients were prescribed insulin. In contrast, T2D patients were given Metformin (if HbA1c < 8.5%), or insulin (if HbA1c ≥ 8.5%) in the short term (2-6 weeks) before transitioning to Metformin. In addition, we checked responses to treatment after six months to confirm the diagnosis. All recruited patients were tested for pancreatic islet autoantibodies, such as ICAs and GADAs, and for thyroid autoantibodies. The study excluded any participants meeting any of the following criteria: (c) medical records could not be collected; (d) neonatal diabetes or monogenic diabetes; (e) post-transplant diabetes mellitus; (f) non-T1D and non-T2D. Figure 1 illustrates the flowchart for study recruitment.
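For illustration, the diagnostic and grouping thresholds above can be written as a simple rule set. This is only a minimal sketch of the stated criteria, not the study's actual decision procedure (which also relied on the one-month C-peptide re-assessment and the six-month treatment response); the record field names are hypothetical.

```python
# Minimal sketch of the stated diagnostic and T1D/T2D grouping thresholds.
# Field names are hypothetical; thresholds are those quoted in the text.

def has_diabetes(p):
    """Any one of the three admission criteria confirms diabetes."""
    return (p["hba1c_pct"] > 6.5
            or p["fasting_glucose_mg_dl"] > 126
            or p["random_glucose_mg_dl"] > 200)

def working_group(p):
    """Simplified T1D/T2D grouping based on the listed clinical features."""
    if (p["fasting_c_peptide_ng_ml"] < 1.1 and not p["overweight_or_obese"]
            and not p["family_history_of_diabetes"]):
        return "T1D"
    if ((p["fasting_c_peptide_ng_ml"] > 1.1 or p["fasting_insulin"] > 2.6)
            and p["overweight_or_obese"] and p["pubertal_onset"]):
        return "T2D"
    return "indeterminate"  # would require the follow-up assessments

patient = {"hba1c_pct": 10.2, "fasting_glucose_mg_dl": 180, "random_glucose_mg_dl": 320,
           "fasting_c_peptide_ng_ml": 0.4, "overweight_or_obese": False,
           "family_history_of_diabetes": False, "fasting_insulin": 1.8,
           "pubertal_onset": False}
print(has_diabetes(patient), working_group(patient))   # True T1D
```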
Data Collection

We collected demographic and clinical information, including age at diagnosis, gender, family history of diabetes, insulin usage, diabetes-related symptoms (polydipsia, polyuria, polyphagia, unintentional weight loss), diabetic ketoacidosis (DKA), and acanthosis nigricans, from parents, caregivers, and hospital personnel. To measure body weight and height, participants were instructed to wear light clothing and be barefoot; body mass index (BMI) was then calculated as BMI = body weight/height², in kilograms per square meter. Obesity was defined, based on the age of the child, according to the WHO Child Growth Standards median [17]. Children under five years of age were defined as overweight when weight-for-length/height or BMI-for-age was > 2 standard deviations (SD) and ≤ 3 SD above the median, and as obese when weight-for-length/height or BMI-for-age was > 3 SD. Children aged 5-19 years were classified as overweight when BMI-for-age was > 1 SD and obese when BMI-for-age was > 2 SD [17].
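A minimal sketch of these cut-offs is given below. It assumes the age-specific z-scores (SD scores) have already been obtained from the WHO growth-standard reference; the numbers in the example call are hypothetical.

```python
# Minimal sketch of the BMI formula and the WHO cut-offs quoted above.
# The z-score is assumed to come from the WHO growth-standard reference tables.

def bmi(weight_kg, height_m):
    """BMI in kg/m^2."""
    return weight_kg / height_m ** 2

def weight_status(age_years, zscore):
    """Age-dependent overweight/obesity classification from the text."""
    if age_years < 5:
        if zscore > 3:
            return "obese"
        if zscore > 2:          # > 2 SD and <= 3 SD
            return "overweight"
    else:                        # 5-19 years
        if zscore > 2:
            return "obese"
        if zscore > 1:
            return "overweight"
    return "not overweight"

print(round(bmi(52.0, 1.48), 1))   # 23.7 kg/m^2 for a hypothetical child
print(weight_status(12, 2.4))      # 'obese' under the 5-19-year rule
```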
A serum sample from each patient was also used for the antibody assays, covering islet cell autoantibodies (ICAs), glutamic acid decarboxylase antibodies (GADAs), thyroperoxidase antibodies (anti-TPOs), thyroglobulin antibodies (anti-TGs), and thyrotrophin receptor antibodies (TRAbs); samples were stored in aliquots at −75 °C until the analysis was performed. Autoantibodies against pancreatic islet cells, consisting of ICAs and GADAs, were detected using a qualitative enzyme-linked immunosorbent assay (ELISA; ImmunomatTM, DGR Instruments GmbH, Germany). Samples with ratio values of 0.95 U/mL or less showed a low level of ICAs (negative result), and samples between 0.95 and 1.05 U/mL were considered indeterminate (borderline). Positive results (high levels of ICAs) were determined by ratio values greater than 1.05 U/mL. Regarding GADA detection levels, a GADA ratio of less than 1 U/mL indicates a low antibody level (negative result), while an antibody ratio of 1 to 1.05 demonstrates a borderline level. Ratio values greater than 1.05 U/mL indicate a positive result for GADAs.

Statistical Analysis

Data were presented as percentages for the categorical variables and as mean ± SD for the continuous variables. The 95% confidence intervals (CIs) of the T1D and T2D prevalence rates were estimated with the percentile method from 1000 bootstrap resamples, i.e., using the 25th and 975th values of the ranked bootstrap estimates. Pearson's chi-squared test was used to determine the differences in categorical variables between T1D and T2D subjects, and the independent Student's t-test was used to compare the means of continuous data between T1D and T2D subjects. Differences were considered statistically significant for p-values below 0.05. All statistical analyses were performed with R (version 3.6.3; R Core Team, 2020).

Demographic Characteristics of the Study Population with T1D and T2D

Of the 145 diabetic patients aged 10.30 ± 3.64 years (range 1-15 years) and with a BMI of 19.45 ± 10.62 kg/m², T1D and T2D accounted for 53.1% (95% CI 44.8-60.7%) and 46.9% (95% CI 39.3-55.2%), respectively. The study population was dominated by female subjects (female:male ratio = 1.4:1.0); however, no statistically significant gender difference was observed. A significant difference was observed in the distribution of the age groups between children with T1D and those with T2D (p < 0.001). In further detail, there was a high prevalence of T1D among young children, particularly those under the age of nine years (65.9%). By contrast, most of the subjects diagnosed with T2D were older children, with an age range between 10 and 15 years (91.2%). There were significant differences in the family history of diabetes between children with T1D and T2D. Notably, the highest rates belonged to those with second-degree relatives with diabetes, in both T1D and T2D children (Table 1).

Clinical Characteristics of the Study Population with T1D and T2D

Typically, children who have been diagnosed with T1D or T2D present with polyuria and unintended weight loss, which were found to be significantly more prevalent in the T1D population. In addition, children with T1D had a significantly higher rate of insulin use, DKA, and urine ketone incidence than those with T2D at admission (26% vs. 0%, 26% vs. 0%, and 93.5% vs.
27.9%, respectively) in this study, whereas subjects with T2D presented with significantly higher rates of overweight and obesity, as well as signs of insulin resistance (acanthosis nigricans), when compared to those with T1D (98.5% vs. 3.9% and 48.5% vs. 0%, respectively; Table 2). In terms of the blood tests, T1D patients had significantly higher concentrations of blood glucose than those with T2D (419.69 ± 167.50 vs. 280.35 ± 125.47 mg/dL, p < 0.001). T1D patients, on the other hand, had significantly lower levels of fasting insulin (2.90 ± 1.63 vs. 28.73 ± 27.25 µU/mL, p < 0.001) and C-peptide (0.42 ± 0.27 vs. 3.17 ± 2.16 ng/mL, p < 0.001) compared to those with T2D. No significant difference in HbA1c values between children with T1D and T2D was observed (Table 2).

Prevalence of Autoantibodies among Children with T1D and T2D

A notable finding of this study was that only 3.9% (n = 3) of the 77 patients with T1D had positive ICAs, while 96.1% (n = 74) had negative ICA tests. At the same time, 79.2% of patients with T1D presented with positive GADA tests, a significantly higher proportion than in those with T2D (79.2% vs. 29.4%, p < 0.001). Additionally, we observed that patients with T2D had a limited number of positive ICA tests (1.5%). In terms of the combination of ICA and GADA status, we found that patients with T1D had a significantly higher rate of positive ICA and positive GADA tests than those with T2D (3.9% vs. 1.5%, p < 0.001). Similarly, among those with negative ICA tests, patients with T1D had a higher proportion of positive GADA tests than those with T2D. Intriguingly, none of the patients was observed to have a positive ICA test with a concomitant negative GADA test, even among T1D patients. As the age groups were significantly different between the T1D and T2D groups (Table 1), we present the distribution of positive ICA and GADA tests in the different age groups in Figure 2. Regarding patients with T1D, the older ones (5-9 and 10-15 years) were positive for either ICAs, or ICAs and GADAs, while only a small number of patients aged 0-4 years appeared to be positive for GADAs (18%). By contrast, a limited number of T2D patients aged 10-15 years were positive for either ICAs or GADAs. A notable finding was that, of the 68 recruited T2D participants, 27.9% of children aged 10-15 years had a positive GADA test (19 out of 68), and all were classified as overweight (n = 9) or obese (n = 10; Figure 2). The present study also revealed that several patients with T2D were positive for TRAb (n = 4) and anti-TPO (n = 1). However, only two patients with T1D presented as positive for anti-TG, while no such incidence was found in patients with T2D (Table 3).
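As a concrete illustration of the percentile-bootstrap interval described in the statistical-analysis section (e.g., the 53.1% T1D prevalence with 95% CI 44.8-60.7%), the sketch below applies the method to a hypothetical 0/1 recoding of the 145 children; it is not the patient-level study data, and the resulting interval will differ slightly from the published one.

```python
# Minimal sketch of the percentile-bootstrap 95% CI for a prevalence.
# The 0/1 vector is a hypothetical reconstruction (77 of 145 coded as T1D).
import random

random.seed(0)
group = [1] * 77 + [0] * 68            # 1 = T1D, 0 = T2D; 77/145 ~ 53.1%

def bootstrap_ci(data, n_boot=1000, alpha=0.05):
    estimates = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]   # resample with replacement
        estimates.append(sum(resample) / len(resample))
    estimates.sort()
    lo = estimates[int(n_boot * alpha / 2)]              # ~25th ranked value of 1000
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]    # ~975th ranked value of 1000
    return lo, hi

print(sum(group) / len(group), bootstrap_ci(group))
```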
Discussion

The present study examined the prevalence and distribution across ages of autoimmune antibodies among children with T1D and T2D living in southern Vietnam. Positive ICA tests were found in 3.9% of T1D patients and 1.5% of T2D patients; on the other hand, positive GADA tests were found in 79.2% of T1D patients and 29.4% of T2D patients. A limited number of patients had both antibodies (ICA-positive and GADA-positive), identified in 3.9% of T1D patients and 1.5% of T2D patients. Only 18.0% of T1D patients aged 0-4 years had positive GADA tests (n = 11), and none had positive ICA tests. In T1D patients aged 5 to 9 years, 33.3% had positive ICA tests, 37.3% had positive GADA tests, and 33.3% had both. In addition, many more older patients with T1D (aged 10 to 15) were positive for autoimmune markers, including ICAs (66.7%), GADAs (44.3%), and both (66.7%). In contrast, the patients with T2D who had positive ICA and GADA tests were aged 10 to 15, and none were aged 0-4. Typically, T1D is classified as autoimmune (T1A) or idiopathic (T1B) diabetes. The former type is more common (80-90%) and is caused by the autoimmune destruction of the insulin-producing β-cells in the pancreas, resulting in insulin deficiency [18]. Meanwhile, the latter type is known as islet antibody-negative diabetes, with a profound loss of insulin secretion, and is reported mainly in Asia [14,18]. Similarly, we observed that 96.1% of patients with T1D were negative for ICAs and could thus have been misclassified if the T1D diagnosis had rested on clinical characteristics alone. In this case, GADAs, markers of the autoimmune nature of T1D that persist over many years, could be recruited. Despite having negative ICA tests, 75.3% of T1D patients had positive GADA tests, classifying these cases as autoimmune [19]. In contrast, 20.8% of patients with T1D were classified as having T1B as a result of negative ICA and GADA tests. In line with our findings, Libman and colleagues [12] reported that a total of 12% of black patients with T1D did not possess any islet antibodies (ICAs, GADAs, and ICA512), compared with 4% of white patients. This difference in autoantibody prevalence between pediatric black and white patients with T1D might be explained by differences in the onset and/or progression of insulin-dependent diabetes mellitus [12]. According to many works in the literature, there is a wide variation in the prevalence of positive GADA status among different ethnic groups, such as 79% in Germany [20] and Belgium [21], 73% in Taiwan, and 44.3% in Singapore [22]. Increasing levels of GADAs indicate an ongoing immune attack against pancreatic β-cells; however, this autoimmune marker requires a persistent elevation for at least six months following diagnosis [19], which is longer than the period we observed. Therefore, it was essential to follow up and re-examine these individuals carefully. Importantly, we could not find any statistical differences between children who had at least one positive ICA or GADA test and those who presented as all-negative with respect to blood glucose, hemoglobin A1c, insulin, and C-peptide levels at diagnosis, even in the presence of diabetic ketoacidosis (Supplementary Table S1). In contrast to our study, a previous study of young adults with T1D in Vietnam reported that 60% of patients who were negative for ICAs and GADAs presented with ketoacidosis, without clear evidence of humoral or autoimmune mediators [23].
It should be noted that similar findings have been reported in some other populations from Asia, suggesting that the findings may be generalizable across ethnic groups [24-26]. Therefore, further research is necessary to determine the role of the age of onset and the clinical manifestations of T1D in children and adults. Additionally, although other autoimmune diseases related to the thyroid gland have been documented as being associated with T1D [27], only two patients with T1D (2.6%) demonstrated positive thyroglobulin antibodies (TGAb), while being negative for thyrotrophin receptor antibodies (TRAb) and TPO antibodies. These were older children, aged 9-13 years, who were negative for ICAs but positive for GADAs. As stated, autoimmune thyroid disease is the most common disorder related to T1D, but its incidence varies significantly in different populations [11,25,26]; therefore, the current population may not be representative, and this requires further mechanistic studies. Last but not least, the present study found that patients with T1D were significantly more likely to suffer ketoacidosis than those with T2D, a feature classically used to help ascertain diabetes type [9]. There is considerable difficulty in distinguishing between diabetes types in children due to the overlap of clinical features such as polyuria, polydipsia, polyphagia, and even ketoacidosis [9]; this is consistent with the results of the current study, except for ketoacidosis. In the present study, insulin and C-peptide levels, obesity, acanthosis nigricans, and a family history of T2D all showed higher frequencies in the T2D group than in the T1D group. Moreover, the study identified that 30.9% of children and adolescents with T2D carried at least one autoimmune antibody against β-cells and could possibly be diagnosed with double diabetes, a prevalence rate consistent with previous studies [11,27,28]. The co-existence of T1D and T2D may increase complications and worsen outcomes, such as the microvascular and metabolic disorders associated with T1D and the macrovascular disorders associated with T2D [28-30]. Apart from the evidence, derived from a large cohort study, that 27.3% of adult T2D patients have thyroid diseases, the incidence of thyroid disorders in children with T2D is not fully understood [31]. Notably, the current study showed that five patients with T2D carried thyroid autoantibodies (TRAb and TPO-Ab). Only one of these patients, aged 14 years, presented as positive for GADAs and ICAs, which may indicate hybrid diabetes or poor management of T2D hyperglycemia [29]. Although much evidence has reported an underlying complex linkage between T2D and thyroid dysfunction [30,32], there is no clear guidance on how frequently thyroid function should be monitored in patients with T2D, especially in children. Due to these factors, further research is necessary to determine the adverse complications associated with co-existing double diabetes and thyroid disease in children and adolescents, which needs to be addressed to develop an optimal glycemic control treatment regimen. Though this is the first study investigating the existence of antibodies against β-cells in children with T1D and T2D in Vietnam, the present report still has some limitations. Firstly, we had to confront the limitations inherent in a cross-sectional design, with data obtained only when patients were admitted to the hospital.
Thus, the follow-up plan would be to undertake further monitoring and treatment strategies. Secondly, although ICAs and GADAs are commonly used in clinical studies, it would be more effective to examine other autoimmune markers, including insulinoma-associated antigen-2 autoantibodies (IA-2A), insulin autoantibodies (IAA), and zinc transporter 8 autoantibodies (ZnT8A), which could be used to predict early autoimmune T1D and to determine the type of diabetes [32]. Finally, the small sample size of the current study is not representative of the entire population; hence, further research should be conducted on children with T1D and T2D in different regions throughout the country.

Conclusions

In conclusion, the presence of autoimmune antibodies related to diabetes plays an important role in distinguishing diabetes types, which was of great interest in the present study. However, the prevalence of two common autoimmune markers, ICAs and GADAs, in children with T1D was not as high as expected, especially for ICAs. Pediatric patients with T1D had a low prevalence of thyroid autoimmune antibodies, contrary to the concept of the co-existence of autoimmune diseases. By age group, T1D patients younger than four years were more likely to have GADAs than ICAs, which were more commonly observed in older children (5-15 years). As far as T2D is concerned, the present study also found that a significant number of patients had ICAs and GADAs; however, only a small number of them were positive for thyroid-related autoantibodies. Accordingly, the evaluation of antibodies against β-cells and the thyroid gland in children with suspected T1D or T2D should be considered at diagnosis and over the longer term, to better ascertain the diabetes type and to identify the appropriate therapy, facilitating individualized care and management.

Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jcm12041420/s1, Supplementary Table S1: Clinical characteristics of pediatric patients with T1D whose ICAs and GADAs are both negative and with at least one positive.

Informed Consent Statement: Informed consent forms were signed by the parents on behalf of their children, with additional assent obtained from older children who were able to read and understand the research project.
v3-fos-license